EPSRC-eligible and EU applicants

Below, we list PhD topics for EPSRC-eligible and EU applicants only. For overseas applicants, see here.

To be EPSRC-eligible for a full award, an applicant must have no restrictions on how long they can stay in the UK and must have been ordinarily resident in the UK for at least 3 years prior to the start of the studentship (with some further constraints regarding residence for education).

For further details, see the EPSRC Student Eligibility guide or contact Anne Murphy.

Teaching Robots to Plan

Project number: 124026
To develop and implement methods for instructing robots directly through natural language, where the instructions refer to temporally extended plans executed on physical robots (e.g., object manipulation).
Dr. Subramanian Ramamoorthy
University of Edinburgh

The vast majority of robot applications involve not just one isolated task (such as grasping an object) but carefully choreographed sequences of such tasks, with dependencies between tasks not only in terms of what comes after what, but also in how the previous task should be performed (in a quantitative sense, at the level of motor control) in order to set up for the next. Moreover, these tasks involve numerous subjective variables, e.g., how close should the robot come to a delicate object, or how hard should it pull on a cable?
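As a purely illustrative sketch (ours, not drawn from the project description), one way to picture such a temporally extended plan is as a sequence of parameterised skills, where a later step constrains the motor-level parameters of an earlier one; every name and number below is hypothetical:

# Hypothetical sketch: a temporally extended plan as a sequence of
# parameterised skills, where later steps constrain earlier ones.
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    params: dict = field(default_factory=dict)  # motor-level parameters

plan = [
    Skill("approach", {"clearance_m": 0.05}),  # how close to a delicate object
    Skill("grasp",    {"grip_force_n": 2.0}),  # grasp firmness sets up the pull
    Skill("pull",     {"max_force_n": 10.0}),  # how hard to pull on the cable
]

# Dependency between steps: pulling hard only succeeds if the preceding
# grasp was firm enough, so the pull step tightens the grasp parameters.
if plan[2].params["max_force_n"] > 5.0:
    plan[1].params["grip_force_n"] = max(plan[1].params["grip_force_n"], 4.0)

Instructing the robot through natural language then amounts to mapping an utterance onto such a structure, including its quantitative parameters.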

Robots Safe and Secure by Construction

Project number: 400007
Verified implementation of machine-learning components of autonomous systems.
Dr. Ekaterina Komendantskaya
Heriot-Watt University

Robotic applications are spreading to a variety of domains, from autonomous cars and drones to domestic robots and personal devices. Each application domain comes with a rich set of requirements, such as legal policies, safety and security standards, company values, or simply public perception. These requirements must be realised as verifiable properties of software and hardware. Consider the following policy: a self-driving car must never break the highway code.
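As a hedged illustration (ours, not taken from the project description), one small fragment of that policy, the speed-limit rule, could be phrased as a machine-checkable safety invariant over a planned trajectory; all names and figures below are hypothetical:

# Hypothetical sketch: one highway-code rule ("never exceed the speed
# limit") encoded as a checkable safety property over a planned trajectory.
from dataclasses import dataclass

@dataclass
class State:
    speed_mps: float  # predicted vehicle speed, metres per second
    limit_mps: float  # speed limit in force at this point of the plan

def respects_speed_limit(trajectory: list[State]) -> bool:
    """Safety invariant: true iff no state in the plan breaks the limit."""
    return all(s.speed_mps <= s.limit_mps for s in trajectory)

# Usage: a planner would reject any trajectory that violates the invariant.
plan = [State(speed_mps=27.0, limit_mps=31.3),
        State(speed_mps=33.0, limit_mps=31.3)]
assert not respects_speed_limit(plan)  # the second state breaks the limit

A verified implementation would go further than such run-time checks, proving properties like this about the machine-learning components themselves.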

Learning Dexterous Robotic Manipulation

Project number: 124025
Learning autonomous grasping and manipulation skills that are safe to deploy in human environments, using data-efficient deep reinforcement learning and human-robot skill transfer.
Dr. Zhibin Li
University of Edinburgh

Background

A large variety of robotic applications involve handling various objects as the core process for task completion. To date, most of these jobs are still performed by people. Although some are automated by robots, those solutions rely primarily on pre-designed rules or tele-operation (whose operational time is limited by operator cognitive overload), which unavoidably limits performance in changing environments. This project consists of multiple challenging research topics in robotic manipulation.

Deep Analysis: A Critical Enabler to Certifying Robotic and Autonomous Systems

Project number: 300007
Develop techniques that assist in certifying robotic and autonomous systems through a deep analysis at the level of requirements, problem worlds and specifications.
Prof. Andrew Ireland
Heriot-Watt University

Safety-critical robotic and autonomous systems, such as Unmanned Air Vehicles (UAVs) that operate beyond visual line of sight, require the highest level of certification. Certifiers are concerned with how such systems behave within their environment, as defined by system-wide requirements, e.g. compliance with the rules of the air (i.e. SERA). In contrast, software developers focus on specifications: how the system software should behave based upon operational modes and input signals. Many catastrophic system failures arise from the gap between these two views.

3D vision and robotic navigation using Event and Polarisation Cameras

Project number: 123407
The project will explore the use of emerging imaging modalities, such as event and polarisation cameras, to perform 3D vision in very dynamic, complex and untextured environments where classical approaches generally fail.
Prof. Yvan Petillot
Heriot-Watt University

Optical cameras have been used very successfully for 3D vision and robotic navigation in texture-rich environments and good visibility conditions. However, they have strong limitations in more complex scenarios where the environment is either very dynamic or visibility is poor. In this thesis, you will explore new sensor modalities and how they can help solve these problems.

Multimodal fusion for large-scale 3D mapping

Project number: 134001
The project will explore the combination of 3D point clouds with imaging modalities (colour, hyperspectral images) via machine learning and computer graphics to improve the characterisation of complex 3D scenes.
Dr. Yoann Altmann
Heriot-Watt University

Lidar point clouds have been widely used to segment large 3D scenes, such as urban areas and vegetated regions (forests, crops, …), and to build elevation profiles. However, efficient point cloud analysis in the presence of complex scenes and partially transparent objects (e.g., forest canopy) is still an unsolved challenge.

Shape-Programmable Soft Actuators

Project number: 120019
The objective of this project is to design and develop soft actuators with programmable motion output.
Dr. Morteza Amjadi
Heriot-Watt University

Soft actuator materials are being actively pursued owing to their importance in soft robotics, artificial muscles, biomimetic devices, and beyond. Electrically-, chemically- and light-activated actuators are the most widely explored soft actuators. Recently, significant efforts have been made to reduce the driving voltage and temperature of thermoresponsive actuators, to develop chemical actuators that can function in air, and to enhance the energy efficiency of light-responsive actuators.

Wearable and Stretchable Strain/Tactile Sensors for Soft Robotic Applications

Project number: 120018
To design and develop stretchable optomechanical sensors, and to investigate their integration with soft robotic grippers, towards soft robots with sensory feedback.
Dr. Morteza Amjadi
Heriot-Watt University

Wearable sensor technologies have recently attracted tremendous attention due to their potential applications in soft robotics, human motion detection, prosthetics, and personalized healthcare monitoring. Remarkable advances in materials science, nanotechnology, and biotechnology have led to the development of various wearable and stretchable sensors. For example, researchers, including our group, have developed resistive and capacitive-type strain and pressure sensors and demonstrated their use in soft robotics, tactile sensing and perception, and human body motion detection.

Adaptive sensor fusion for resilient vehicle sensing

Project number: 140028
To develop an integrated sensing system for the Robotarium van to enable driver assistance and progress towards vehicle autonomy that is resilient to poor weather conditions, multi-sensor interference, adversarial attack and GPS denial.
Prof. Andrew Wallace
Heriot-Watt University

Automotive sensing must be robust or resilient. For example, optical sensors rapidly become ineffective in heavy rain or fog, and radar sensors provide low-resolution data that is inadequate for scene mapping and object identification. Further, most autonomous or semi-autonomous vehicle trials are conducted in sparse sensor environments, so that interference is rarely a problem, and assume pre-learnt road network data and continuous GPS availability.