Functional Behavioural States and Potential Health Condition Monitoring for Older Adults Using a Camera

Date: 
Thu, 27/03/2025 - 13:00
Location: 
Informatics Forum, G.03
Speaker: 
Longfei Chen
University of Edinburgh

Abstract: Vision-based methods have been shown to achieve clinically acceptable accuracy in measuring patient mobility. In this work, the daily behaviours and mobility of older adults are monitored using a privacy-preserving camera in their homes in Edinburgh for several weeks. The monitoring approach includes a discriminative model to classify individuals' Functional Behavioural States (postural-mobility features), and a personalized generative model to describe each individual’s statistical patterns over time.
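
As a rough illustration of this two-stage idea (the state labels, window features, and classifier below are hypothetical placeholders, not the models from the talk), one could classify each time window into a behavioural state and then summarise each person's patterns with a simple generative model such as a state-transition matrix:

    # Minimal sketch of a two-stage monitoring pipeline: a discriminative classifier
    # assigns a behavioural state to each time window, and a simple per-person
    # generative model (here, a state-transition matrix) summarises patterns over time.
    # The states, window features, and classifier are hypothetical placeholders.
    import numpy as np

    STATES = ["lying", "sitting", "standing", "walking"]  # hypothetical label set

    def classify_window(features: np.ndarray) -> int:
        """Placeholder discriminative model: map window features to a state index."""
        # In practice this would be a trained classifier over pose/motion features.
        return int(np.argmax(features[: len(STATES)]))

    def fit_transition_model(state_sequence, n_states: int) -> np.ndarray:
        """Per-person generative description: first-order state-transition probabilities."""
        counts = np.ones((n_states, n_states))  # Laplace smoothing
        for s, s_next in zip(state_sequence[:-1], state_sequence[1:]):
            counts[s, s_next] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    # Example: random features standing in for weeks of camera-derived windows.
    rng = np.random.default_rng(0)
    windows = rng.random((1000, 8))
    sequence = [classify_window(w) for w in windows]
    print(np.round(fit_transition_model(sequence, len(STATES)), 2))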

The Ingredients for Efficient Robot Learning and Exploration

Date: 
Thu, 20/03/2025 - 13:00
Location: 
Informatics Forum, G.07
Speaker: 
Rika Antonova
University of Cambridge

Abstract: In this talk, I will outline ingredients for enabling efficient robot learning. First, I will demonstrate how large vision-language models can enhance scene understanding and generalization, allowing robots to learn general rules from specific examples for handling everyday objects. Then, I will describe a policy learning method that leverages equivariance to significantly reduce the amount of training data needed for learning from human demonstrations. Moving beyond learning from demonstrations, we will explore how simulation can enable robots to learn autonomously.
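
As a toy illustration of the symmetry being leveraged (not the method from the talk; the 2D observation and action below are hypothetical simplifications), a planar-rotation-equivariant policy satisfies pi(R o) = R pi(o), so each human demonstration can stand in for a whole family of rotated ones:

    # Toy illustration of planar (SO(2)) equivariance in learning from demonstrations:
    # rotating the observation should rotate the action in the same way. Augmenting
    # demonstrations with rotated copies is one simple way to exploit this symmetry.
    # The 2D observation/action format here is a hypothetical simplification.
    import numpy as np

    def rot(theta: float) -> np.ndarray:
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def augment(obs_xy: np.ndarray, act_xy: np.ndarray, n_copies: int = 8):
        """Yield rotated (observation, action) pairs derived from one demonstration."""
        for theta in np.linspace(0.0, 2 * np.pi, n_copies, endpoint=False):
            R = rot(theta)
            yield R @ obs_xy, R @ act_xy  # both transform with the same rotation

    # Example: one demonstration (object position, end-effector displacement).
    demo_obs, demo_act = np.array([0.3, 0.1]), np.array([0.05, -0.02])
    augmented = list(augment(demo_obs, demo_act))
    print(len(augmented), augmented[1])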

The Reward Hypothesis

Date: 
Tue, 11/03/2025 - 10:00
Location: 
Informatics Forum, G.03
Speaker: 
David Abel

Abstract: The reward hypothesis states "all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward)." In this talk, I present our recent line of work that aims to settle this hypothesis: we prove that, under a particular interpretation, the reward hypothesis is true if and only if a set of five conditions holds. I then explore some of the consequences of this finding for the design of learning agents and their goals.
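
For reference, one standard formalisation of the quantity in question is the expected discounted return, with a "goal" understood as the ordering over policies it induces (the five conditions themselves are the subject of the talk and are not reproduced here); in LaTeX:

    % Expected return of policy \pi with scalar reward r_t and discount \gamma \in [0, 1);
    % a "goal" is then the ordering over policies that J induces.
    J(\pi) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t}\right],
    \qquad
    \pi \succeq \pi' \iff J(\pi) \ge J(\pi').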

Hydraulic Power for the Digital Age

Date: 
Wed, 12/03/2025 - 19:00 to 20:00
Location: 
The Royal Scots Club, 30 Abercromby Place, Edinburgh, EH3 6QE
Speaker: 
Dr Niall Caldwell
Director of Flowcopter

Description: Hydraulic actuation is used wherever high forces are needed in compact packages, including heavy offroad machinery, industrial equipment and aircraft. But this multi-billion-dollar industry has a guilty secret: despite over a century of incremental improvement, today’s hydraulic systems are still based on legacy analog architectures, wasting as much as 70% of the energy used to power them and causing millions of tonnes of needless CO2 emissions per year.

Is bee dance a language?

Date: 
Thu, 06/03/2025 - 13:00
Location: 
Informatics Forum, G.03
Speaker: 
Barbara Webb
University of Edinburgh

Bees are the only species other than humans that have been shown to be capable of communicating detailed information about their spatial surroundings to conspecifics. By performing a stereotyped 'dance' in the nest, the bee provides its nestmate with the spatial coordinates of a distant food source, such that the nestmate can subsequently travel there independently. This behaviour has famously been described as a language, and has been studied behaviourally for many years, but only recently have plausible neural models to explain this behaviour been proposed.
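
As a rough numerical illustration of the information the dance carries (the distance calibration below is a placeholder; reported values vary between studies and colonies), the angle of the waggle run relative to gravity encodes the food direction relative to the sun's azimuth, and the waggle duration scales with distance:

    # Rough illustration of decoding a waggle dance into a food-source vector.
    # The angle of the waggle run relative to vertical encodes direction relative
    # to the sun's azimuth; waggle-run duration scales roughly linearly with distance.
    # The distance calibration constant is a placeholder and varies between studies.
    METRES_PER_SECOND_OF_WAGGLE = 750.0  # placeholder calibration

    def decode_dance(dance_angle_deg: float, waggle_duration_s: float,
                     sun_azimuth_deg: float):
        """Return (bearing in degrees clockwise from north, distance in metres)."""
        bearing = (sun_azimuth_deg + dance_angle_deg) % 360.0
        distance = waggle_duration_s * METRES_PER_SECOND_OF_WAGGLE
        return bearing, distance

    # Example: a 2-second waggle run 40 degrees clockwise of vertical, sun azimuth 180.
    print(decode_dance(dance_angle_deg=40.0, waggle_duration_s=2.0, sun_azimuth_deg=180.0))
    # -> (220.0, 1500.0): 220 degrees from north, 1.5 km under this placeholder calibration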

Fast Flow-based Visuomotor Policies via Conditional Optimal Transport Couplings

Date: 
Thu, 27/02/2025 - 13:00
Location: 
Informatics Forum, G.03
Speaker: 
Andreas Sochopoulos (Supervisor: Prof Sethu Vijayakumar)
University of Edinburgh

Diffusion and flow matching policies have recently shown remarkable performance in robotic applications by accurately capturing multimodal robot trajectory distributions. However, their computationally expensive inference, due to iterative denoising or numerical integration of an ODE, limits their application as real-time controllers for robots. We introduce a methodology that utilizes Optimal Transport couplings between noise and samples, in order to force straight solutions
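
A minimal sketch of the coupling idea, assuming a standard minibatch optimal-transport pairing as used in OT conditional flow matching (the dimensions and stand-in data below are placeholders, not the policy setup from the talk): pairing each noise sample with a data sample via an optimal assignment makes the interpolation paths the velocity field must fit straight, which is what permits accurate integration in very few steps.

    # Sketch of minibatch optimal-transport (OT) coupling for flow matching:
    # pair noise samples with data samples via an optimal assignment, then train a
    # velocity field on straight-line interpolations between the paired points.
    # The "data" here stands in for robot trajectory/action samples; the velocity
    # model itself is omitted and only the regression targets are shown.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def ot_coupled_targets(noise: np.ndarray, data: np.ndarray, rng):
        """Return (x_t, t, target_velocity) for one training step under OT coupling."""
        # Optimal assignment between noise and data minimises total squared distance.
        cost = ((noise[:, None, :] - data[None, :, :]) ** 2).sum(-1)
        rows, cols = linear_sum_assignment(cost)
        x0, x1 = noise[rows], data[cols]
        t = rng.random((len(x0), 1))
        x_t = (1 - t) * x0 + t * x1          # straight-line interpolation
        return x_t, t, x1 - x0               # constant velocity along a straight path

    rng = np.random.default_rng(0)
    noise = rng.standard_normal((64, 16))
    data = rng.standard_normal((64, 16)) + 3.0   # placeholder for action/trajectory samples
    x_t, t, v_target = ot_coupled_targets(noise, data, rng)
    print(x_t.shape, v_target.shape)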

IPAB Workshop - Depth estimation with light field cameras: What can we learn from insect eyes?

Date: 
Thu, 20/02/2025 - 13:00
Location: 
Informatics Forum, G.03
Speaker: 
Cora Hummert
University of Edinburgh

Abstract: Fast and accurate depth estimation remains a major challenge in computer vision, particularly at long distances and in dynamic scenes.

Inspired by biological vision, we explore the similarities between insect compound eyes and light field cameras, both of which enable depth perception by capturing multiple viewpoints of a scene. Additionally, we investigate whether the light-induced motion of photoreceptors in the fly eye can be used to
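
As a simple reference for the geometry both systems rely on (the baseline, focal length, and disparities below are placeholder numbers, not values from the talk), depth follows from the disparity between neighbouring viewpoints, and the estimate degrades as disparity shrinks at long range:

    # Depth from disparity between two neighbouring viewpoints (sub-aperture images
    # of a light field camera, or neighbouring insect-eye-like views): with baseline b,
    # focal length f (in pixels), and disparity d (in pixels), depth Z = b * f / d.
    # The numbers below are placeholders for illustration only.
    import numpy as np

    def depth_from_disparity(disparity_px, baseline_m: float, focal_px: float) -> np.ndarray:
        """Return per-pixel depth in metres; zero or negative disparity maps to infinity."""
        disparity_px = np.asarray(disparity_px, dtype=float)
        return np.where(disparity_px > 0, baseline_m * focal_px / disparity_px, np.inf)

    print(depth_from_disparity([8.0, 2.0, 0.5], baseline_m=0.01, focal_px=800.0))
    # Smaller disparity -> larger depth; accuracy degrades as disparity approaches zero.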

IPAB Workshop - Non-prehensile Manipulation through RL

Date: 
Thu, 20/02/2025 - 13:00
Location: 
Informatics Forum, G.03
Speaker: 
João Pousa De Moura
University of Edinburgh

Abstract: Manipulation without grasping, known as non-prehensile manipulation, is essential for dexterous robots in contact-rich environments, but it presents many challenges relating to under-actuation, hybrid dynamics, and frictional uncertainty. Additionally, object occlusion becomes a critical problem when contact is uncertain and there is relative motion between the object and the robot. In this talk, I will present some of our recent work on non-prehensile manipulation through a model-free reinforcement learning paradigm.
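
As a generic sketch of what a model-free formulation of such a task looks like (the toy quasi-static dynamics, reward, and placeholder policy below are illustrative only, not the system from the talk):

    # Generic sketch of a model-free RL loop for a planar pushing (non-prehensile) task.
    # The quasi-static "environment", dense reward, and heuristic policy are toy
    # placeholders illustrating the interaction loop, not the method from the talk.
    import numpy as np

    rng = np.random.default_rng(0)
    goal = np.array([1.0, 0.0])

    def step(obj_pos: np.ndarray, push: np.ndarray):
        """Toy quasi-static dynamics: the object moves a fraction of the push, with friction noise."""
        obj_pos = obj_pos + 0.5 * push + 0.01 * rng.standard_normal(2)
        reward = -np.linalg.norm(obj_pos - goal)  # dense distance-to-goal reward
        return obj_pos, reward

    def policy(obs: np.ndarray) -> np.ndarray:
        """Placeholder stochastic policy; in practice a trained actor network."""
        return np.clip(goal - obs + 0.1 * rng.standard_normal(2), -0.1, 0.1)

    obj = np.zeros(2)
    for t in range(200):
        obj, r = step(obj, policy(obj))
    print("final distance to goal:", float(np.linalg.norm(obj - goal)))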

Together We Learn, Individually We Protect: A Vision for Differential Privacy

Date: 
Thu, 13/02/2025 - 15:00 to 16:00
Location: 
Heriot-Watt University, EMG.83
Speaker: 
Dr Han Wu
Southampton University

As AI systems advance at a mind-blowing pace, privacy and security risks remain critically overlooked, even in privacy-preserving frameworks. Federated Learning (FL) was originally designed to enhance privacy by keeping data decentralized, yet it remains vulnerable to inference attacks, which can extract sensitive information from shared model updates.
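
A minimal sketch of the kind of protection differential privacy can add in this setting (the clipping norm and noise multiplier below are arbitrary placeholders; a real deployment calibrates them to an (epsilon, delta) budget): each client clips and perturbs its model update before sharing it.

    # Sketch of a differentially private client update in federated learning:
    # clip the local update's L2 norm, then add Gaussian noise before sending it
    # to the server, limiting what inference attacks can recover from the update.
    # The clipping norm and noise multiplier are arbitrary placeholders.
    import numpy as np

    def privatize_update(update: np.ndarray, clip_norm: float = 1.0,
                         noise_multiplier: float = 1.1, rng=None) -> np.ndarray:
        """Clip the update's L2 norm, then add calibrated Gaussian noise."""
        if rng is None:
            rng = np.random.default_rng()
        norm = np.linalg.norm(update)
        clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
        return clipped + noise

    # Example: a raw local update (e.g., the weight delta after local training).
    raw_update = np.random.default_rng(0).standard_normal(10) * 5.0
    print(privatize_update(raw_update, rng=np.random.default_rng(1)))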

What kind of AI do we need for assistive autonomy?

Date: 
Thu, 13/02/2025 - 13:00 to 14:00
Location: 
Informatics Forum, G.03
Speaker: 
Professor Ram Ramamoorthy
University of Edinburgh

Abstract: Robotics is enjoying a surge in interest, driven in part by the rapid developments in AI over the past decade. Beyond the research interest, what are autonomous robots really good for? I believe that autonomy used in an assistive capacity is a good goal to aim for. Such a human-centred perspective raises new questions for AI and for how it is used within robotics.