Exploiting Novel Representations for Constrained Multi-Robot Collaborative Loco-Manipulation
The aim of this project is to equip a team of mobile manipulators with the capability of dexterous, simultaneous manipulation and locomotion.
Humans perform numerous tasks in their daily lives that require collaboration with others. In these scenarios, two humans work together, anticipating how the other will behave and guiding their partner towards the joint goal. For instance, two humans can effortlessly move and flip large boxes in a warehouse. Likewise, a human can pour wine into a glass held out by another, without spilling.
Robots and UAVs are often required to operate (semi-)autonomously, especially where human operators have limited situational awareness to ensure safe operation. In such cases, self-navigation capabilities are required. This proposal concerns the adaptation of RF signals of opportunity for navigation using SLAM approaches.
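As a rough illustration of how such signals could enter a SLAM pipeline, the sketch below shows a single EKF-style position update from one range measurement to an RF transmitter of opportunity. The beacon location, state estimate, and noise levels are illustrative assumptions, not details from the proposal.

```python
# A minimal sketch of an EKF range update against an RF transmitter of
# opportunity; all numbers here are made up for illustration.
import numpy as np

x = np.array([1.8, 0.5])        # estimated robot position (x, y), metres
P = np.eye(2) * 0.5             # position covariance
beacon = np.array([5.0, 3.0])   # known (already mapped) RF transmitter location
R = 0.1**2                      # range measurement noise variance

z = 3.9                         # measured range, e.g. from RF time-of-flight

# Predicted range and its Jacobian with respect to the robot position.
delta = x - beacon
r_pred = np.linalg.norm(delta)
H = (delta / r_pred).reshape(1, 2)

# Standard EKF measurement update.
S = H @ P @ H.T + R             # innovation covariance
K = P @ H.T / S                 # Kalman gain
x = x + (K * (z - r_pred)).ravel()
P = (np.eye(2) - K @ H) @ P

print("updated position:", x)
```

In a full SLAM formulation the transmitter positions would themselves be part of the estimated state, but the update step has the same shape.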
Multi-agent reinforcement learning (MARL) uses reinforcement learning techniques to train a set of agents to solve a specified task. This includes agents working in a team to collaboratively accomplish tasks, as well as agents in competitive scenarios with conflicting interests. Recent advances in MARL have leveraged deep learning to scale to larger problems and address some of the field's inherent challenges [1].
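To make the cooperative setting concrete, here is a minimal sketch of independent Q-learning, a standard MARL baseline, in a two-agent matrix game. The payoff table and hyperparameters are illustrative assumptions chosen for the example.

```python
# A minimal sketch of independent Q-learning in a two-agent cooperative
# matrix game: each agent learns from a shared team reward without
# observing the other's action values.
import numpy as np

rng = np.random.default_rng(0)

# Shared team reward, indexed by the joint action
# (row = agent 0's action, column = agent 1's action).
payoff = np.array([[11.0, -3.0],
                   [-3.0,  7.0]])

n_actions = 2
q = [np.zeros(n_actions), np.zeros(n_actions)]  # one Q-table per agent
alpha, eps = 0.1, 0.2                           # learning rate, exploration

for step in range(5000):
    # Each agent acts epsilon-greedily on its own Q-values (no communication).
    acts = [qi.argmax() if rng.random() > eps else rng.integers(n_actions)
            for qi in q]
    r = payoff[acts[0], acts[1]]                # both agents receive this reward
    for i in range(2):
        # Stateless task, so the update is a running average toward the reward.
        q[i][acts[i]] += alpha * (r - q[i][acts[i]])

print("Agent 0 Q-values:", q[0])
print("Agent 1 Q-values:", q[1])
```

Even in this tiny game, the miscoordination penalties illustrate one of the inherent challenges mentioned above: each agent's learning target shifts as its partner's behaviour changes.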
Manipulation of soft tissues is an important part of many application areas, ranging from surgical systems to food processing and other manufacturing operations. In the surgical setting, there is a pressing need for new small-form-factor solutions that enable detailed characterisation of the continuum mechanics properties of tissues – something that is hard to do with off-the-shelf sensing systems. Incorporating such sensors within robots that can perform active model-based sensing would represent a step change towards generalisable autonomy in these application areas.
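As a toy illustration of model-based characterisation, the sketch below fits a linear elastic stiffness to synthetic force-indentation data by least squares. The data, the linear model, and the noise level are all assumptions for the example; real tissue characterisation would involve richer continuum models.

```python
# A minimal sketch of fitting a tissue stiffness parameter from
# (synthetic) force-indentation measurements.
import numpy as np

rng = np.random.default_rng(1)
depth = np.linspace(0, 5e-3, 20)      # indentation depths (m)
k_true = 800.0                        # "true" stiffness (N/m), for simulation
force = k_true * depth + rng.normal(0, 0.05, depth.size)  # noisy force readings

# Closed-form least-squares estimate of stiffness for the model F = k * d.
k_hat = (depth @ force) / (depth @ depth)
print(f"estimated stiffness: {k_hat:.1f} N/m")
```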
Hierarchical Reinforcement Learning (HRL) solves complex tasks at different levels of temporal abstraction, using the knowledge captured in separately trained experts. This project will investigate novel HRL algorithms and apply them to multiple robotic domains, i.e. the algorithms should be agnostic to different domains and robotic platforms/tasks.
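The control loop that temporal abstraction implies can be sketched in a few lines: a high-level policy selects a trained low-level expert, which then executes for several primitive steps before control returns to the high level. The environment, the experts, and the fixed option length below are hypothetical placeholders, not the project's algorithms.

```python
# A minimal sketch of an options-style HRL control loop: a high-level
# policy picks a low-level expert, which runs for k primitive steps.
class Env:
    """Toy 1-D environment: reach position 10 starting from 0."""
    def __init__(self):
        self.pos = 0
    def step(self, action):           # action is -1, 0, or +1
        self.pos += action
        return self.pos, self.pos >= 10

# Low-level "experts": here just fixed primitive behaviours.
experts = {
    "advance": lambda state: 1,
    "hold":    lambda state: 0,
}

def high_level_policy(state):
    # Placeholder for a learned high-level policy; here it always advances.
    return "advance"

env, state, done = Env(), 0, False
while not done:
    option = high_level_policy(state)
    for _ in range(3):                # the chosen option persists for k=3 steps
        state, done = env.step(experts[option](state))
        if done:
            break
print("Reached goal at position", state)
```

A domain-agnostic HRL algorithm would leave this loop unchanged while swapping in different environments, experts, and learned high-level policies.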
There are numerous spectacular examples of biological organisms performing manipulation tasks under quite severe constraints. For instance, birds build nests using bits and pieces of objects they may know little about. Likewise, insects move food around with only very partial, egocentric views of large objects. Can robots learn from this?
The development of many commonly used engineering systems has depended crucially on an infrastructure of description languages at varying levels of abstraction, together with tools for simulation, verification, and so on. This has certainly been the case for the majority of the electronic devices we rely on so heavily.
Visually guided reaching behaviour is important for robotics, and has been studied in humans, fixed-base robot arms, and humanoid robots. As yet, autonomous flying vehicles are rarely equipped with appendages for reaching out to interact with objects in the world, and how reaching behaviour can be controlled by a robot in flight is a new field of study [1]. This project takes inspiration from the hawkmoth, which can hover in front of a flower and use visual information to make precisely targeted movements that allow its long proboscis to contact the nectar [2].
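One standard way to frame such hover-and-reach behaviour is image-based visual servoing: a controller drives the image-plane error between the target feature and the reaching appendage toward zero. The sketch below uses a simple proportional law with a first-order model of how commands move the feature; the gain, timestep, and dynamics are assumptions for illustration.

```python
# A minimal sketch of image-based visual servoing for a hovering reach:
# a proportional controller reduces the pixel error to the target feature.
import numpy as np

target_px = np.array([320.0, 240.0])   # desired feature location (image centre)
feature_px = np.array([400.0, 180.0])  # current feature location in the image
gain, dt = 5.0, 0.02                   # proportional gain, control timestep

for t in range(500):
    error = target_px - feature_px
    if np.linalg.norm(error) < 1.0:    # within one pixel of the target
        break
    # First-order model: the commanded velocity moves the feature
    # proportionally toward the target each timestep.
    feature_px += gain * error * dt

print(f"steps: {t}, residual error: {np.linalg.norm(error):.2f} px")
```

On a real flying vehicle the interaction between camera motion and image features is captured by an image Jacobian rather than this idealised first-order model.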
For an autonomous robotic system to successfully, and safely, interact with the world around it, it needs to be able to reason about the objects it encounters not just as collections of pixels but as higher-level semantic concepts. Furthermore, it must also determine the precise location and 3D spatial configuration of these objects relative to the robot. For example, it is vitally important that such a system can correctly identify any humans or animals that may be nearby and also infer their poses, i.e. the spatial configuration of their body parts.
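To make the "location and 3D spatial configuration" part concrete, the sketch below recovers an object's 6-DoF pose from known 2D-3D correspondences with OpenCV's solvePnP. The camera intrinsics, object model points, and ground-truth pose are synthetic assumptions; in practice the 2D points would come from a learned detector or keypoint network.

```python
# A minimal sketch of 6-DoF object pose recovery from 2D-3D
# correspondences using the Perspective-n-Point solver in OpenCV.
import numpy as np
import cv2

K = np.array([[600.0, 0.0, 320.0],    # hypothetical pinhole intrinsics
              [0.0, 600.0, 240.0],
              [0.0,   0.0,   1.0]])

# 3D model points on a known object (e.g. corners of a small box), metres.
model_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0],
                      [0, 0.1, 0], [0, 0, 0.1], [0.1, 0.1, 0.1]], float)

# Synthesise image observations by projecting through a ground-truth pose.
rvec_gt = np.array([0.1, -0.2, 0.05])   # rotation (Rodrigues vector)
tvec_gt = np.array([0.05, -0.02, 0.8])  # translation (metres)
img_pts, _ = cv2.projectPoints(model_pts, rvec_gt, tvec_gt, K, None)

# Recover the object pose relative to the camera from the correspondences.
ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, None)
print("recovered rotation (Rodrigues):", rvec.ravel())
print("recovered translation:", tvec.ravel())
```

Articulated pose estimation for humans and animals extends this idea from a single rigid transform to the configuration of many connected body parts.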