EPSRC-Eligible and EU applicants

Below, we list PhD topics for EPSRC-Eligible and EU applicants only. For overseas applicants, see here.

To be EPSRC-Eligible for a full award, an applicant must have no restrictions on how long they can stay in the UK and must have been ordinarily resident in the UK for at least 3 years prior to the start of the studentship (with some further constraints regarding residence for education).

For further details, see the EPSRC Student Eligibility guide or contact Anne Murphy.


Analysis of Controlled Stochastic Sampling for Training RL Agents for Robotics Tasks

Project number: 123406
For tasks where path-planning of real robots is guided via a simulation of a virtual agent, this project aims to understand the role and impact of the randomisation scheme on the efficiency and generalisability of the agent.
Dr. Kartic Subr
University of Edinburgh

Data-driven machine learning techniques are widely used in the field of robotics to inform autonomous decision-making and to perform control or path-planning. Supervised learning and reinforcement learning have proven particularly well suited to canonical tasks that are integral to robotics applications. However, these techniques rely on data in the form of action-label (supervised), action-value (regression) or action-reward (RL) pairs, where the action is a path (or other command) executed by a real robot.
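
To make the three data forms above concrete, here is a small, purely illustrative sketch; all field values are hypothetical and not taken from the project:

```python
# Illustrative sketch only: the three data forms mentioned above, written as
# plain Python records. All field values are hypothetical.

# Supervised: action-label pair, e.g. a candidate path labelled feasible/infeasible.
supervised_example = {"action": [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)], "label": "feasible"}

# Regression: action-value pair, e.g. the measured cost of executing the path.
regression_example = {"action": [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)], "value": 3.7}

# Reinforcement learning: action-reward pair collected from a (simulated) rollout.
rl_example = {"action": [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)], "reward": -1.2}

# In simulation-guided training, many such samples are generated cheaply by a
# virtual agent; the project asks how the randomisation used to generate them
# affects efficiency and generalisability.
print(supervised_example, regression_example, rl_example, sep="\n")
```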

Subsea intervention using autonomous systems

Project number: 124024
Develop control algorithms to enable safe semi-autonomous subsea manipulation in the presence of communication delays.
Prof. Yvan Petillot
Heriot-Watt University

Subsea inspection of structures is now a commercial reality, and the next frontier in subsea robotics is safe physical interaction with underwater structures. This requires the development of new force-compliant control algorithms that can take external disturbances into account. There is also a need to work across a variety of control modes, from full teleoperation over high-bandwidth links to shared autonomy (our goal) over low-bandwidth and intermittent connections, to enable shore-based control of remote platforms.
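
As a rough illustration of what force compliance means at the control level, the following is a textbook-style impedance-control sketch, not the project's controller; the gains and states below are hypothetical values chosen only for the example:

```python
import numpy as np

# Illustrative sketch only (not the project's controller): a basic task-space
# impedance law. The commanded force behaves like a spring-damper pulling the
# end-effector towards the desired pose, so contact with a structure or an
# external disturbance is absorbed compliantly rather than resisted rigidly.

K = np.diag([200.0, 200.0, 200.0])   # stiffness gains (N/m), hypothetical
D = np.diag([30.0, 30.0, 30.0])      # damping gains (N*s/m), hypothetical

def compliant_force(x, x_desired, v):
    """Commanded task-space force for one control step."""
    return K @ (x_desired - x) - D @ v

# One hypothetical control step: current position, desired position (m), velocity (m/s).
x = np.array([0.40, 0.10, -0.95])
x_des = np.array([0.42, 0.10, -1.00])
v = np.array([0.01, 0.00, -0.02])

print(compliant_force(x, x_des, v))   # small corrective force in newtons
```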

Internet of Robotic Things

Project number: 300001
Investigating algorithms and applications for Internet of Things and robotic systems.
Dr. Mauro Dragone
Heriot-Watt University

The Internet-of-Robotic-Things (IoRT) brings together autonomous robotic systems with the Internet of Things (IoT) vision of sensors and smart objects pervasively embedded in everyday environments [1-4]. This convergence can enable novel applications in almost every sector where cooperation between robots and IoT technology can be imagined. Early signs of it can be seen in network robot systems [5], robot ecologies [6], and approaches such as cloud robotics [7].

Curiosity-driven Learning for Visual Understanding

Project number: 400005
The goal of this project is to equip learning systems with curiosity, grounded in what they already understand about the world around them, in order to make learning faster and more efficient.
Dr. Laura Sevilla-Lara
University of Edinburgh

Curiosity guides humans to learn efficiently. It incentivizes us to spend more energy and time examining new, unexpected things, and to disregard those we already fully understand, making our learning more efficient. Much of the visual learning done today is passive: learning systems are exposed to large amounts of training data and learn from each sample multiple times, regardless of how well they can already recognize it. This makes the process slow, especially given the increasingly large number of samples in modern datasets.
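
One very simple way to picture this idea, as a sketch rather than the project's method, is to sample training examples in proportion to how surprising they still are to the model, for instance their current loss:

```python
import numpy as np

# Illustrative sketch only: loss-weighted ("curiosity-weighted") sampling.
# Examples the model already predicts well are sampled less often; surprising,
# high-error examples are sampled more often. The loss values are hypothetical.

rng = np.random.default_rng(0)

def curiosity_weights(per_example_loss, temperature=1.0):
    """Turn per-example losses into sampling probabilities (softmax-style)."""
    scaled = np.asarray(per_example_loss) / temperature
    scaled = scaled - scaled.max()          # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

# Hypothetical losses for 6 training examples after the latest epoch.
losses = np.array([0.05, 0.02, 1.3, 0.9, 0.01, 2.1])
probs = curiosity_weights(losses)

# Draw the next mini-batch: high-loss ("unexpected") examples dominate.
batch_indices = rng.choice(len(losses), size=4, replace=False, p=probs)
print(batch_indices, probs.round(3))
```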

Perceiving Humans in Detail: Fine-grained Classification of Human Action Recognition

Project number: 400004
The goal of this project is to push the current ability of robots to understand humans to a more sophisticated, fine-grained level.
Dr. Laura Sevilla-Lara
University of Edinburgh

Human action recognition is a fundamental problem that underlies many applications in robotics, including interaction, home care and collaboration. The actions that robots or computers can recognize today are often coarse and simplistic, in the sense that they are very different from each other; for example, eating vs. playing the piano, or sitting vs. standing. Both datasets and technology tend to be broad and crude. As human-robot interaction becomes more natural, we require more sophisticated technology for perceiving humans.

Robot perception with minimal human supervision

Project number: 400003
Learning to understand the visual environment from robot cameras with minimal manual (human) supervision.
Dr. Hakan Bilen
University of Edinburgh

Vision is a key ability for humans and robots alike to understand and extract information from the real world, and it is crucial for interacting safely with our environments. In contrast to human perception, state-of-the-art machine vision methods require millions of images and their manual labels to learn each visual task. This project focuses on designing machine learning and computer vision techniques that can help robots learn multiple tasks from limited labelled data.

There are two main directions for potential projects:

Controllable neural text generation for safe human-machine interactions

Project number: 400002
The goal of this research is to develop novel neural text generation models, which can guarantee semantic completeness and thus enable safe human-machine interactions.
Prof. Verena Rieser
Heriot-Watt University

Natural Language Generation (NLG) is the task of translating machine-readable representations and data into human language, and it is thus vital for accountability in safe human-machine collaboration. Neural network architectures for NLG are promising since they are able to capture linguistic knowledge through latent representations learned from raw input data; they therefore simplify system design by avoiding costly manual feature engineering and have the potential to scale more easily to new data and domains.

New number formats for faster deep learning

Project number: 400001
Exploring the use of POSIT numbers for adaptive precision schemes in deep learning algorithms.
Prof. Sven-Bodo Scholz
Heriot-Watt University

The training phase in deep learning is very compute- and data-intensive; its efficiency therefore typically restricts the quality of results that can be achieved within a given time frame.

Many learning algorithms are dominated by the speed at which data can be brought to the CPU, i.e., by the memory bandwidth of the executing hardware. Consequently, techniques based on reduced-precision number representations have been shown to produce results faster without a significant loss in quality.
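
As a rough, hands-on illustration of the bandwidth argument (using standard IEEE floating-point types rather than posits), the sketch below shows how much less data has to move per step when precision is reduced:

```python
import numpy as np

# Illustrative sketch only (not the project's posit implementation): the same
# weight matrix stored at different precisions. Halving the element width
# halves the bytes that must cross the memory bus per training step, which is
# the bandwidth saving that reduced-precision schemes exploit.

rng = np.random.default_rng(0)
weights64 = rng.random((4096, 4096))          # float64, 8 bytes per element
weights32 = weights64.astype(np.float32)      # 4 bytes per element
weights16 = weights64.astype(np.float16)      # 2 bytes per element

for w in (weights64, weights32, weights16):
    print(f"{w.dtype}: {w.nbytes / 2**20:.0f} MiB")

# Reduced precision introduces rounding error; adaptive schemes (as proposed
# here with posits) try to keep that error small where it matters.
print("max abs error (float16 vs float64):",
      np.max(np.abs(weights64 - weights16.astype(np.float64))))
```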

Verifying Robotic Behavior and Synthesizing Provably Correct Behavior

Project number: 300006
Study how to verify robotic behavior against temporal specifications (e.g., safety correctness) and/or to synthesize controllers for reactive missions operating in dynamic environments with unknown components.
Dr. Vaishak Belle
University of Edinburgh

How can we ensure that robots behave as desired by humans? To ensure that the execution of complex actions leads to the desired behavior of the robot, one needs to specify the required properties in a formal way, and then verify that these requirements are met by every execution of the program.
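
To give a flavour of what "specify formally, then verify" looks like in the simplest possible setting, here is an illustrative sketch that checks a safety property over finite, hypothetical execution traces; a real approach would use temporal logics such as LTL together with a model checker or theorem prover:

```python
# Illustrative sketch only: checking the temporal safety property
# "the robot never enters an unsafe state" over finite execution traces.
# The state names and traces below are hypothetical.

def always(prop, trace):
    """Safety: prop must hold in every state of the trace."""
    return all(prop(state) for state in trace)

def never_unsafe(state):
    return state != "collision"

traces = [
    ["idle", "plan", "move", "grasp", "idle"],    # satisfies the property
    ["idle", "plan", "move", "collision"],        # violates it
]

for t in traces:
    verdict = "OK" if always(never_unsafe, t) else "VIOLATION"
    print(verdict, t)
```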

Genetic Programming for Control

Project number: 300005
Complexity reduction of deep control algorithms by genetic programming for autonomous robotics applications.
Dr. J. Michael Herrmann
University of Edinburgh

The aim of this project is to develop a rigorous approach to the automatic programming of control systems, based on an intermediate representation of the control task that has been implicitly learned by a neural network. While deep neural networks (DNNs) have been shown to be capable of solving control problems effectively through learning from demonstration and reinforcement learning, the resulting representations are computationally complex and lack immediate inspectability.
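
The following is a toy sketch of this distillation idea, not the project's method: a genetic-programming-style search for a compact, inspectable expression that imitates a "teacher" policy. The teacher function, terminal set and the mutation-free evolutionary loop are all simplifying assumptions made for the example; in the project the teacher would be a trained deep network.

```python
import random

# Toy sketch only (not the project's method): search for a compact symbolic
# controller that imitates a "teacher" policy. Here the teacher is a
# hypothetical linear feedback law standing in for a trained DNN.

random.seed(0)

def teacher(x, v):
    return -2.0 * x - 0.5 * v              # stand-in for the DNN controller

TERMINALS = ["x", "v", "0.5", "1.0", "2.0", "-1.0", "-2.0"]
OPS = ["+", "-", "*"]

def random_expr(depth=2):
    """Random expression tree over the state variables and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x, v):
    if isinstance(expr, str):
        if expr == "x":
            return x
        if expr == "v":
            return v
        return float(expr)
    op, a, b = expr
    a, b = evaluate(a, x, v), evaluate(b, x, v)
    return a + b if op == "+" else a - b if op == "-" else a * b

def error(expr, samples):
    """How far the candidate is from the teacher on sampled states."""
    return sum((evaluate(expr, x, v) - teacher(x, v)) ** 2 for x, v in samples)

samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]

# Crude evolutionary loop: keep the best candidates and refill with fresh
# random ones (real genetic programming would also use crossover and mutation).
population = [random_expr() for _ in range(300)]
for generation in range(40):
    population.sort(key=lambda e: error(e, samples))
    population = population[:30] + [random_expr() for _ in range(270)]

best = population[0]
print("best expression:", best, "  squared error:", round(error(best, samples), 4))
```

The result is a small expression tree that can be read and inspected directly, in contrast to the opaque weight matrices of the network it imitates.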