University of Edinburgh

Exploiting Novel Representations for Constrained Multi-Robot Collaborative Loco-Manipulation

Project number: 
123022
The principal goal of the project is to address multi-robot loco-manipulation tasks, both from a decentralised planning and a predictive control perspective. The aim is to realise human-robot teaming for handling rigid and/or actuated objects.
Prof. Sethu Vijayakumar FRSE
University of Edinburgh

The aim of this project is to enable a team of mobile manipulators with the capability of dexterous simultaneous manipulation and locomotion.

Game Theoretic Approaches to Advancing Physical Human-Robot Collaborations through Nudging

Project number: 
120025
Create a robotic system that can infer and influence human behaviour to assist humans in physical collaborative tasks. Develop a game theoretic framework to improve human-robot physical collaboration using implicit communication processes, such as intent estimation and nudging, as well as explicit feedforward sensory cues. Embed this into an existing methodology of optimal control (OC) based hybrid trajectory optimisation (TO) for realising dyadic collaborative tasks using an existing co-bot setup.
Prof. Sethu Vijayakumar FRSE
University of Edinburgh

Humans perform numerous tasks in their daily life that require collaboration with others. In these scenarios the two humans work together, anticipate how the other will behave and guide their partner towards the joint goal. For instance, two humans can effortlessly move and flip large boxes in a warehouse. Likewise, a human can pour wine into a glass held out by another human, without spilling.
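The intent estimation underlying such collaboration can be loosely illustrated with a Bayesian observer model: given noisy observations of a partner's motion, maintain a posterior over a set of candidate goals. The Boltzmann-rational noise model and all parameter values below are illustrative assumptions, not the project's method.

```python
import math

def infer_goal(observations, goals, beta=2.0):
    """Toy Bayesian intent estimation: given a partner's observed headings
    and candidate goal directions, compute a posterior over goals, assuming
    the partner moves noisily toward its goal (Boltzmann-rational model)."""
    log_post = [0.0] * len(goals)
    for obs in observations:
        for i, g in enumerate(goals):
            # likelihood is higher when the observed heading aligns with goal g
            log_post[i] += -beta * abs(obs - g)
    m = max(log_post)                               # subtract max for stability
    unnorm = [math.exp(lp - m) for lp in log_post]
    z = sum(unnorm)
    return [u / z for u in unnorm]

post = infer_goal(observations=[0.1, 0.0, -0.05], goals=[0.0, 1.5])
# posterior mass concentrates on the goal near heading 0.0
```

A robot that maintains such a posterior can then choose actions ("nudges") that both help the partner and sharpen its own estimate of the partner's intent.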

Radio Frequency Simultaneous Localisation and Mapping (RF SLAM) for Robot Navigation

Project number: 
100021
To develop an autonomous navigation system based on ubiquitous RF signals
Prof. Tughrul Arslan
University of Edinburgh

Robots and UAVs are often required to operate (semi)autonomously, especially when there is limited human awareness for safe operation. In such cases, self-navigation functionalities are required. This proposal relates to the adaptation of RF signals of opportunity for navigation using SLAM approaches.
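A basic building block of RF-based navigation is converting received signal strength into a range estimate, for example via the standard log-distance path-loss model. The reference power and path-loss exponent below are illustrative values, not project parameters.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Estimate range (in metres) from received signal strength using the
    log-distance path-loss model, a common ingredient of RF-based SLAM.
    rssi_at_1m is the reference power measured at 1 m; path_loss_exp
    depends on the propagation environment (2.0 = free space)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

print(rssi_to_distance(-40.0))  # at the reference power -> 1.0 m
print(rssi_to_distance(-60.0))  # 20 dB weaker -> 10.0 m
```

In a SLAM setting, many such noisy range estimates to (initially unknown) transmitter positions would be fused over time to jointly estimate the map of transmitters and the robot's trajectory.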

Multi-Agent Reinforcement Learning

Project number: 
300009
Develop and evaluate algorithms for multi-agent reinforcement learning in complex environments
Dr. Stefano Albrecht
University of Edinburgh

Multi-agent reinforcement learning (MARL) uses reinforcement learning techniques to train a set of agents to solve a specified task. This includes agents working in a team to collaboratively accomplish tasks, as well as agents in competitive scenarios with conflicting interests. Recent advances in MARL have leveraged deep learning to scale to bigger problems and address some of the inherent challenges in MARL [1].
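As a minimal illustration of the cooperative setting, the sketch below runs independent Q-learning, one of the simplest MARL baselines, on a one-shot coordination game where two agents are rewarded only when their actions match. All names and parameters are illustrative, not from the project.

```python
import random

def independent_q_learning(n_actions=2, episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Two independent Q-learners on a one-shot cooperative matrix game:
    both agents receive reward 1 if their actions match, else 0.
    Each agent learns its own Q-table, treating the other as part of
    the environment."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(2)]  # one Q-table per agent
    for _ in range(episodes):
        acts = []
        for i in range(2):
            if rng.random() < eps:  # epsilon-greedy exploration
                acts.append(rng.randrange(n_actions))
            else:
                acts.append(max(range(n_actions), key=lambda a: q[i][a]))
        reward = 1.0 if acts[0] == acts[1] else 0.0
        for i in range(2):  # each agent updates only its own table
            q[i][acts[i]] += alpha * (reward - q[i][acts[i]])
    return q

q = independent_q_learning()
# After training, both agents typically prefer the same action.
```

The non-stationarity visible even in this toy example (each agent's environment changes as the other learns) is one of the inherent MARL challenges that deep MARL methods aim to address.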

Novel visuo-tactile sensing mechanisms for soft tissue manipulation

Project number: 
134003
To explore novel sensing and neuromorphic computing mechanisms for detailed robotic characterisation of soft tissue properties, motivated by medical robotics use cases
Prof. Subramanian Ramamoorthy
University of Edinburgh

Manipulation of soft tissues is an important part of many application areas, ranging from surgical systems to food processing and other manufacturing operations. In the surgical setting, there is a pressing need for new small form factor solutions that enable detailed characterisation of the continuum mechanics properties of tissues, something that is hard to do with off-the-shelf sensing systems. Incorporating such sensors within robots that can perform active model-based sensing would represent a step change towards generalisable autonomy in these application areas.

Hierarchical Deep Reinforcement Learning for Continuous Robot Control

Project number: 
134003
Learning complex sequences and coordination of physical interaction skills from multimodal data
Dr. Zhibin Li
University of Edinburgh

Hierarchical Reinforcement Learning (HRL) solves complex tasks at different levels of temporal abstraction using the knowledge of different trained experts. This project will investigate novel HRL algorithms and apply them to multiple robotic domains, i.e., the algorithms should be agnostic to different domains and robotic platforms/tasks.
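The "trained experts" idea can be sketched as a two-level controller: a high-level policy selects which pretrained low-level expert (option) to run, based on the current phase of the task, and the chosen expert emits the actual motor command. Everything below (expert names, the distance threshold) is illustrative, not from the project.

```python
# Toy two-level hierarchy: the high-level policy provides temporal
# abstraction by switching between pretrained low-level experts.

def walk_expert(state):
    return "step"            # low-level action for locomotion

def grasp_expert(state):
    return "close_gripper"   # low-level action for manipulation

def high_level_policy(state):
    # Switch experts when the robot is close enough to the object.
    return walk_expert if state["dist_to_object"] > 0.05 else grasp_expert

def act(state):
    expert = high_level_policy(state)
    return expert(state)

print(act({"dist_to_object": 1.0}))   # far away -> locomotion expert
print(act({"dist_to_object": 0.01}))  # at the object -> manipulation expert
```

In a learned HRL system, both levels would be trained policies rather than hand-written rules, but the interface, a high-level policy choosing among reusable experts, is the same.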

Learning to manipulate unknown objects

Project number: 
140029
To develop a generalisable understanding of strategies for robotic manipulation of unknown and incompletely perceived objects based on visuo-tactile cues
Prof. Subramanian Ramamoorthy
University of Edinburgh

There are numerous spectacular examples of biological organisms performing manipulation tasks under quite severe constraints. For instance, birds build nests using bits and pieces of objects they may know little about. Likewise, insects move food around with only very partial ego-centric views of large objects. Can robots learn from this?

Teaching robots to behave

Project number: 
134002
To develop structured methods for specifying safe closed-loop robot behaviours and tools for automatically synthesising and analysing system behaviour with respect to these specifications.
Prof. Subramanian Ramamoorthy
University of Edinburgh

The development of many commonly used engineering systems has depended crucially on an infrastructure of description languages at varying levels of abstraction, tools for simulation, verification, etc. This has certainly been the case for the majority of electronic devices we rely so heavily on.

Insect-inspired reaching for a flying robot

Project number: 
100020
To build a robot model of the proboscis positioning behaviour of hawkmoths.
Prof. Barbara Webb
University of Edinburgh

Visually guided reaching behaviour is important for robotics, and has been studied in humans, fixed-base robot arms and humanoid robots. As yet, autonomous flying vehicles are rarely equipped with appendages for reaching out to interact with objects in the world, and how reaching behaviour can be controlled from a robot in flight is a new field of study [1]. This project takes inspiration from the hawkmoth, which can hover in front of a flower and use visual information to make precisely targeted movements to allow its long proboscis to contact the nectar [2].

Learning Transferable Representations of Object Pose from Images

Project number: 
240017
Learning how to extract the pose of unseen object categories from images
Dr. Oisin Mac Aodha
University of Edinburgh

For an autonomous robotic system to successfully, and safely, interact with the world around it, it needs to be able to reason about the objects it encounters not just as collections of pixels but as higher-level semantic concepts. Furthermore, it must also determine the precise location and 3D spatial configuration of these objects relative to the robot. For example, it is vitally important that such a system can correctly identify any humans or animals that may be nearby and also infer their poses, i.e., the spatial configuration of their body parts.
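"Pose relative to the robot" can be made concrete with the standard rigid-body representation: a rotation plus a translation, applied to points in the object's frame. The 2D sketch below is a minimal illustration of that representation, not the project's learning method.

```python
import math

def make_pose(theta, tx, ty):
    """A minimal 2D rigid-body pose as a 3x3 homogeneous transform:
    rotation by theta plus translation (tx, ty). This is the kind of
    quantity a pose-estimation system would output relative to the robot."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def apply_pose(T, point):
    """Map a point from the object frame into the robot frame."""
    x, y = point
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

T = make_pose(math.pi / 2, 1.0, 0.0)  # rotate 90 degrees, shift +1 in x
p = apply_pose(T, (1.0, 0.0))
# a point 1 unit along the object's x-axis lands at roughly (1.0, 1.0)
```

Learning to predict such transforms for unseen object categories, rather than computing them from known models, is the core challenge the project targets.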