University of Edinburgh

Autonomous Driving in Urban Environments

Project number: 100017
Develop and evaluate algorithms for autonomous driving in urban environments
Dr. Stefano Albrecht
University of Edinburgh

The coming decades will see the creation of fully autonomous vehicles (AVs) capable of driving without human intervention. Expected benefits of AVs include significant reductions in traffic incidents, congestion, and pollution, along with dramatic improvements in cost-efficiency.

University: University of Edinburgh

PhD position: Human-Robot Interaction (HRI) in Construction

The CyberBuild Lab at The University of Edinburgh invites applications for a PhD position at the forefront of construction innovation, conducting research in the fields of Human-Robot Interaction and Mixed Reality technology. Tuition fees and a stipend are available for Home/EU students (international students can apply, but the funding only covers the Home/EU fee rate).

University: University of Edinburgh

Analysis of Controlled Stochastic Sampling for Training RL Agents for Robotics Tasks

Project number: 123406
For tasks where path-planning for real robots is guided via simulation of a virtual agent, this project aims to understand the role and impact of the randomisation scheme on the efficiency and generalisability of the agent.
Dr. Kartic Subr
University of Edinburgh

Data-driven machine learning techniques are widely used in the field of robotics to inform autonomous decision-making and to perform control or path-planning. Supervised learning and reinforcement learning have been shown to be particularly amenable to canonical tasks that are integral to robotics applications. However, these techniques rely on data in the form of action-label (supervised), action-value (regression) or action-reward (RL) pairs, where the action is a path (or some other execution) carried out by a real robot.
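
To make the data requirement concrete, here is a minimal sketch (the type names and fields are hypothetical illustrations, not taken from the project description) of what the three kinds of pairs might look like in Python:

    # Illustrative sketch only: hypothetical data shapes for the three learning
    # settings mentioned above (supervised, regression, reinforcement learning).
    from dataclasses import dataclass
    from typing import List, Tuple

    Waypoint = Tuple[float, float]   # (x, y) position along a planned path
    Path = List[Waypoint]            # an "action" here is a path executed by the robot

    @dataclass
    class SupervisedExample:
        action: Path                 # executed path
        label: str                   # outcome label, e.g. "reached_goal" (action-label pair)

    @dataclass
    class RegressionExample:
        action: Path
        value: float                 # scalar score of the path (action-value pair)

    @dataclass
    class RLTransition:
        action: Path
        reward: float                # reward observed after executing the path (action-reward pair)

    # Collecting such pairs requires executing paths on a real robot or in a
    # simulator, which is why the cost of data collection matters for these methods.
    example = RLTransition(action=[(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)], reward=1.0)
    print(example.reward)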

Curiosity-driven Learning for Visual Understanding

Project number: 400005
The goal of this project is to equip learning systems with curiosity, grounded in what they already understand about the world around them, so that learning becomes faster and more efficient.
Dr. Laura Sevilla-Lara
University of Edinburgh

Curiosity guides humans to learn efficiently. It incentivizes us to spend more energy and time examining new, unexpected things, and to disregard those we already understand fully. Much of the vision learning done today is passive: learning systems are exposed to large amounts of training data and learn from each sample multiple times, regardless of their current ability to recognize it. This makes the process slow, especially given the increasingly large number of samples in datasets.
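
One simple way to make this idea concrete, as a sketch only (the toy linear model and the loss-proportional sampling below are assumptions for illustration, not the project's method), is to sample training examples in proportion to how poorly the current model handles them:

    # Minimal sketch of curiosity-style sample selection: prefer samples the
    # current model is still bad at (high loss), skip those it already handles.
    import numpy as np

    rng = np.random.default_rng(0)

    def per_sample_loss(weights: np.ndarray, x: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Squared error of a linear model, one value per sample."""
        return (x @ weights - y) ** 2

    # Toy dataset and toy linear "model" standing in for a vision learner.
    x = rng.normal(size=(1000, 8))
    true_w = rng.normal(size=8)
    y = x @ true_w + 0.1 * rng.normal(size=1000)
    w = np.zeros(8)

    for step in range(200):
        losses = per_sample_loss(w, x, y)
        # Curiosity-style sampling: probability proportional to current loss,
        # so surprising (badly predicted) samples are examined more often.
        probs = losses / losses.sum()
        batch = rng.choice(len(x), size=32, p=probs)
        grad = 2 * x[batch].T @ (x[batch] @ w - y[batch]) / len(batch)
        w -= 0.01 * grad

    print("final mean loss:", per_sample_loss(w, x, y).mean())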

Perceiving Humans in Detail: Fine-grained Human Action Recognition

Project number: 400004
The goal of this project is to advance the ability of robots to understand humans at a more sophisticated, fine-grained level.
Dr. Laura Sevilla-Lara
University of Edinburgh

Human action recognition is a fundamental problem that underlies many applications in robotics, including interaction, home care, and collaboration. The actions that robots or computers can recognize today are often coarse and simplistic, in the sense that they are very different from each other: for example, eating vs. playing the piano, or sitting vs. standing. Both the datasets and the technology tend to be broad and crude. As human-robot interaction becomes more natural, we require more sophisticated technology for perceiving humans.

Robot perception with minimal human supervision

Project number: 400003
Learning to understand the visual environment from robot cameras with minimal manual (human) supervision
Dr. Hakan Bilen
University of Edinburgh

Vision is a key ability for humans and robots alike to understand and extract information from the real world, and it is crucial for interacting safely with our environments. In contrast to human perception, state-of-the-art machine vision methods require millions of images and manual labels to learn each visual task. This project focuses on designing machine learning and computer vision techniques that help robots learn multiple tasks from limited labelled data.
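
As a rough illustration of learning several tasks from shared features (the architecture and the particular task heads below are assumptions for illustration, not the project's design), a common pattern is one shared visual encoder with lightweight per-task heads, so that scarce labels for each task benefit from features learned across all of them:

    # Sketch of a shared-backbone multi-task network (assumed architecture):
    # one encoder reused by several task-specific heads.
    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, num_classes: int = 10, num_regression_outputs: int = 1):
            super().__init__()
            # Shared convolutional encoder producing a 64-dimensional feature vector.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Task-specific heads, e.g. object classification and a scalar regression task.
            self.classifier = nn.Linear(64, num_classes)
            self.regressor = nn.Linear(64, num_regression_outputs)

        def forward(self, images: torch.Tensor):
            features = self.encoder(images)
            return self.classifier(features), self.regressor(features)

    net = MultiTaskNet()
    logits, scalar = net(torch.randn(4, 3, 64, 64))
    print(logits.shape, scalar.shape)  # torch.Size([4, 10]) torch.Size([4, 1])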

There are two main directions for potential projects:

Verifying Robotic Behavior and Synthesizing Provably Correct Behavior

Project number: 300006
Study how to verify robotic behavior against temporal specifications (e.g., safety correctness) and/or synthesize controllers for reactive missions operating in dynamical environments with unknown components
Dr. Vaishak Belle
University of Edinburgh

How can we ensure that robots behave as desired by humans? To ensure that the execution of complex actions leads to the desired behavior of the robot, one needs to specify the required properties formally and then verify that these requirements are met by every execution of the program.
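
For instance (an illustrative specification only, not one drawn from the project), a safety requirement and a liveness requirement for a hypothetical delivery robot might be written in linear temporal logic as

    \[
      \varphi \;=\; \underbrace{\mathbf{G}\,\lnot \mathit{collision}}_{\text{safety}}
      \;\wedge\;
      \underbrace{\mathbf{G}\bigl(\mathit{request} \rightarrow \mathbf{F}\,\mathit{delivered}\bigr)}_{\text{liveness}}
    \]

Verification then checks that every execution of the robot's program satisfies the formula; synthesis instead constructs a controller that is guaranteed to satisfy it in the given environment.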

Genetic Programming for Control

Project number: 300005
Complexity reduction of deep control algorithms by genetic programming for autonomous robotics applications.
Dr. J. Michael Herrmann
University of Edinburgh

The aim of this project is to develop a stringent approach to the automatic programming of control systems, based on an intermediate representation of the control task that is implicitly learned by a neural network. While deep neural networks (DNNs) have been shown to be capable of solving control problems effectively based on learning from demonstration and reinforcement learning, the resulting representations are computationally complex and lack immediate inspectability.
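
As a rough sketch of how such a pipeline might look (the placeholder teacher function and the mutation-only evolutionary loop below are assumptions for illustration, not the project's actual method): query the learned controller for state-action pairs, then search a space of small symbolic programs that reproduce them.

    # Minimal genetic-programming-style sketch (illustration only): distil a
    # "teacher" controller (a placeholder for a trained DNN policy) into a
    # small, inspectable symbolic expression over the state variables.
    import math
    import random

    random.seed(0)

    def teacher(x0: float, x1: float) -> float:
        """Placeholder for the trained neural controller being distilled."""
        return math.tanh(0.8 * x0 - 0.3 * x1)

    # Candidate programs are nested tuples, e.g. ("add", ("var", 0), ("const", 0.5)).
    OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b, "mul": lambda a, b: a * b}

    def random_tree(depth: int):
        if depth == 0 or random.random() < 0.3:
            return ("var", random.randint(0, 1)) if random.random() < 0.5 else ("const", random.uniform(-1, 1))
        op = random.choice(list(OPS))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, state):
        kind = tree[0]
        if kind == "var":
            return state[tree[1]]
        if kind == "const":
            return tree[1]
        return OPS[kind](evaluate(tree[1], state), evaluate(tree[2], state))

    def mutate(tree, depth: int = 2):
        # Replace a random subtree with a freshly grown one.
        if random.random() < 0.3 or tree[0] in ("var", "const"):
            return random_tree(depth)
        return (tree[0], *[mutate(child, depth) for child in tree[1:]])

    # States sampled from the controller's input range; the teacher provides targets.
    states = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(200)]
    targets = [teacher(*s) for s in states]

    def fitness(tree):
        return sum((evaluate(tree, s) - t) ** 2 for s, t in zip(states, targets)) / len(states)

    population = [random_tree(3) for _ in range(100)]
    for generation in range(30):
        population.sort(key=fitness)
        survivors = population[:20]                      # truncation selection
        population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

    best = min(population, key=fitness)
    print("best symbolic controller:", best)
    print("mean squared error vs teacher:", round(fitness(best), 4))

A fuller genetic-programming treatment would add crossover and a complexity penalty; the point of the sketch is only that the distilled result is a compact, human-readable expression rather than an opaque network.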