EPSRC-Eligible and EU applicants

Below, we list PhD topics for EPSRC-Eligible and EU applicants only. For overseas applicants, see here.

To be EPSRC-Eligible for a full award, an applicant must have no restrictions on how long they can stay in the UK and have been ordinarily resident in the UK for at least 3 years prior to the start of the studentship (with some further constraint regarding residence for education).

For further details, see the EPSRC Student Eligibility guide or contact Anne Murphy.


Vision-based Mobile Autonomy through Object-Level Scene Understanding and Robust Visual SLAM

Project number: 
The goal of this project is to develop vision-based algorithms for long-term mobile autonomy in dynamic environments, leveraging object-level scene understanding, multi-sensor fusion, and visual SLAM.
Dr. Sen Wang
Heriot-Watt University

AnKobot is sponsoring an exciting PhD project in the field of mobile autonomy using visual Simultaneous Localization and Mapping (SLAM), semantic scene understanding and computer vision.

Safe Human-Robot Interaction for Offshore Inspection

Project number: 
The goal of this project is to enable robots to safely and effectively collaborate with humans in teams through grounded human-robot interaction for offshore inspection.
Prof. Helen Hastie

**Note: Project availability subject to collaboration agreement being signed**

This exciting PhD project is sponsored by Total; you will work on human-robot interaction with their latest robot for the inspection of offshore oil and gas platforms.

Robust and Explainable Machine Learning for FinTech Applications

To develop and compare Gaussian Process models with Deep Neural Networks to provide explainable and quantifiable Machine Learning for FinTech applications.
Prof. Mike Chantler
Heriot-Watt University

Deep Neural Network (DNN) technologies coupled with GPU-type hardware provide practical methods for learning complex functions from vast datasets. However, their architectures are often developed by trial and error, and the resulting systems normally provide ‘black box’ solutions containing many millions of learnt but abstract parameters. They are therefore extremely difficult to interpret and understand, and the accuracy and certainty of their predictions or classifications are normally not known.
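As a sketch of why Gaussian Processes are attractive here: the toy example below fits a one-dimensional GP with a squared-exponential kernel to two points and returns both a predictive mean and a predictive variance, so the model's certainty is quantified alongside its prediction. All data points and hyperparameters are invented for illustration.

```python
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential kernel (length scale is an illustrative choice)."""
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

# Toy training data (illustrative, not from any real FinTech dataset)
X = [0.0, 1.0]
y = [0.0, 1.0]
noise = 1e-6  # observation-noise variance

# Build the 2x2 kernel matrix K + noise*I and invert it in closed form
a = rbf(X[0], X[0]) + noise
b = rbf(X[0], X[1])
d = rbf(X[1], X[1]) + noise
det = a * d - b * b
Kinv = [[d / det, -b / det], [-b / det, a / det]]

def predict(x_star):
    """GP posterior mean and variance at a test input x_star."""
    k_star = [rbf(x_star, X[0]), rbf(x_star, X[1])]
    # alpha = Kinv @ y
    alpha = [Kinv[0][0] * y[0] + Kinv[0][1] * y[1],
             Kinv[1][0] * y[0] + Kinv[1][1] * y[1]]
    mean = k_star[0] * alpha[0] + k_star[1] * alpha[1]
    # v = Kinv @ k_star
    v = [Kinv[0][0] * k_star[0] + Kinv[0][1] * k_star[1],
         Kinv[1][0] * k_star[0] + Kinv[1][1] * k_star[1]]
    var = rbf(x_star, x_star) - (k_star[0] * v[0] + k_star[1] * v[1])
    return mean, var

m, v = predict(0.5)          # between the training points: low variance
m_far, v_far = predict(5.0)  # far from the data: variance approaches the prior
```

Near the training data the predictive variance is small; far from it the variance reverts to the prior, flagging that the model does not know, which is exactly the quantifiable behaviour a DNN typically lacks.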

Mapping High-Level Parallel Code to Bespoke Hardware for Energy-Efficient and Real-Time Autonomous Devices and Smart Sensors

Project number: 
The two key requirements for real-time decision making in robotic systems and smart sensors are: 1) increased compute power for full AI autonomy, and 2) energy efficiency, a critical concern for long-lasting operation. The goal of this project is to develop novel language processing methodologies to create low-energy custom hardware accelerators with Field Programmable Gate Arrays (FPGAs) from algorithms written in SYCL, a portable C++ standard for heterogeneous computing.
Dr. Robert Stewart
Heriot-Watt University

**Note: Project availability subject to collaboration agreement being signed**


Codeplay (https://www.codeplay.com/) is an Edinburgh-based company with industry expertise in compiler construction and processor architectures. They are a leading partner in the standardisation of SYCL, a programming abstraction for heterogeneous hardware, and have SYCL implementations for CPUs and GPUs.

The Codeplay CEO, Andrew Richards, founded the company in 2002. He chairs the working group for the SYCL standard within the Khronos Group.

Analysis of Controlled Stochastic Sampling for training RL Agents for Robotics Tasks

Project number: 
For tasks where path-planning of real robots is guided via a simulation of a virtual agent, this project aims to understand the role and impact of the randomisation scheme on the efficiency and generalisability of the agent.
Dr. Kartic Subr
University of Edinburgh

Data-driven machine learning techniques are popular in robotics for informing autonomous decision-making and performing control or path-planning. Supervised learning and reinforcement learning have been shown to be particularly amenable to canonical tasks that are integral to robotics applications. However, these techniques rely on data in the form of action-label (supervised), action-value (regression) or action-reward (RL) pairs, where the action is a path (or some other behaviour) executed by a real robot.
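As a hedged illustration of the action-reward setting and of randomised simulation, the sketch below trains a tabular Q-learning agent on a toy one-dimensional corridor whose slip probability is re-sampled every episode, a simple domain-randomisation scheme of the kind this project would analyse. The environment, gains and episode counts are all invented for illustration.

```python
import random

def make_env(slip_prob):
    """Toy 1-D corridor: states 0..4, reward 1 at the goal state 4."""
    def step(state, action):  # action is -1 (left) or +1 (right)
        if random.random() < slip_prob:
            action = -action            # randomised slip: stand-in for sim-to-real mismatch
        nxt = max(0, min(4, state + action))
        return nxt, (1.0 if nxt == 4 else 0.0)
    return step

def act(Q, s, eps=0.2):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < eps or Q[(s, -1)] == Q[(s, 1)]:
        return random.choice((-1, 1))
    return 1 if Q[(s, 1)] > Q[(s, -1)] else -1

def train(episodes=2000, slip_range=(0.0, 0.2), seed=0):
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        step = make_env(random.uniform(*slip_range))  # re-sample dynamics per episode
        s = 0
        for _ in range(20):
            a = act(Q, s)
            nxt, r = step(s, a)         # one action-reward pair
            target = r + 0.9 * max(Q[(nxt, -1)], Q[(nxt, 1)])
            Q[(s, a)] += 0.1 * (target - Q[(s, a)])
            s = nxt
            if r > 0:
                break
    return Q

Q = train()
policy = {s: act(Q, s, eps=0.0) for s in range(4)}  # greedy policy after training
```

Varying `slip_range` (the randomisation scheme) and measuring how quickly the policy converges, and how well it holds up under slip values outside the training range, is a miniature version of the efficiency and generalisability questions the project asks.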

Subsea intervention using autonomous systems

Project number: 
Develop control algorithms to enable safe semi-autonomous subsea manipulation with communications delay.
Prof. Yvan Petillot
Heriot-Watt University

Subsea inspection of structures is now commercial, and the next frontier in subsea robotics is safe physical interaction with underwater structures. This requires the development of new control algorithms with force compliance that can take external disturbances into account. There is also a need to work across a variety of control modes, from full teleoperation over high-bandwidth links to shared autonomy (our goal) over low-bandwidth and intermittent connections, to enable shore-based control of remote platforms.
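One common family of force-compliant controllers is admittance control, where a measured external force drives a commanded velocity through a virtual mass-damper. The 1-D sketch below is only illustrative (the gains and forces are invented, not tuned for any real vehicle); a local compliant loop of this kind is one way contact can remain safe even while operator commands arrive over a delayed link.

```python
def admittance_step(vel, f_ext, dt=0.01, mass=2.0, damping=8.0):
    """One step of a 1-D admittance law: mass*acc + damping*vel = f_ext.
    mass and damping are virtual, illustrative gains."""
    acc = (f_ext - damping * vel) / mass
    return vel + acc * dt

# Push on the end-effector for 1 s, then release and let it settle
vel, pos = 0.0, 0.0
for t in range(200):
    f = 5.0 if t < 100 else 0.0   # external contact force (invented profile)
    vel = admittance_step(vel, f)
    pos += vel * 0.01
```

While the force is applied the end-effector yields at a bounded velocity (f/damping at steady state); once the force is removed the velocity decays to zero, so unexpected disturbances produce compliant motion rather than rigid resistance.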

Internet of Robotic Things

Project number: 
Investigating algorithms and applications for Internet of Things and Robotic systems
Dr. Mauro Dragone
Heriot-Watt University

The Internet-of-Robotic-Things (IoRT) brings together autonomous robotic systems with the Internet of Things (IoT) vision of sensors and smart objects pervasively embedded in everyday environments [1-4]. This merge can enable novel applications in almost every sector where cooperation between robots and IoT technology can be imagined. Early signs of this convergence are in network robot systems [5], robot ecologies [6], or in approaches such as cloud robotics [7].

Curiosity-driven Learning for Visual Understanding

Project number: 
The goal of this project is to enable learning systems with the ability to have curiosity, based on their ability to already understand the world around them, to make learning faster and more efficient.
Dr. Laura Sevilla-Lara
University of Edinburgh

Curiosity guides humans to learn efficiently. It incentivizes us to spend more energy and time examining new, unexpected things, and to disregard those we already fully understand, making our learning more efficient. Much of the visual learning done today is passive: learning systems are exposed to large amounts of training data and learn from each sample multiple times, regardless of their current ability to recognize it. This makes the process slow, especially given the increasingly large number of samples in datasets.
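A minimal sketch of the idea, with invented names, data and thresholds: keep a cheap predictive model, treat its prediction error as a curiosity score, and spend learning effort only on samples the model still finds surprising, skipping those it already understands.

```python
def curiosity_sampler(stream, lr=0.5, threshold=0.1):
    """Toy curiosity-driven learner: a running predictor per class;
    samples the model already predicts well are skipped.
    lr and threshold are illustrative choices."""
    model = {}      # class label -> predicted value
    studied = []    # samples the learner actually spent effort on
    for label, value in stream:
        pred = model.get(label, 0.0)
        surprise = abs(value - pred)     # prediction error as curiosity score
        if surprise > threshold:         # only study the unexpected
            model[label] = pred + lr * (value - pred)
            studied.append((label, value))
    return model, studied

# Each class repeats many times, as in a typical passive training loop
stream = [("cat", 1.0)] * 10 + [("dog", 2.0)] * 10
model, studied = curiosity_sampler(stream)
```

The learner ends up with accurate predictions for both classes while studying only a fraction of the stream, because once a class stops being surprising it is disregarded, which is the efficiency argument made above in miniature.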

Perceiving Humans in Detail: Fine-Grained Human Action Recognition

Project number: 
The goal of this project is to push the current ability of robots to understand humans at a more sophisticated, fine-grained level.
Dr. Laura Sevilla-Lara
University of Edinburgh

Human action recognition is a fundamental problem that underlies many applications in robotics, including interaction, home care, collaboration, etc. The actions that can be recognized by robots or computers today are often coarse and simplistic, in the sense that they are very different from each other: for example, eating vs. playing the piano, or sitting vs. standing. Both datasets and technology tend to be broad and crude. As human-robot interaction becomes more natural, we require more sophisticated technology for perceiving humans.