Heriot-Watt University

Robots Safe and Secure by Construction

Project number: 
400007
Verified implementation of machine-learning components of autonomous systems
Dr. Ekaterina Komendantskaya
Heriot-Watt University

Robotic applications are spreading to a variety of domains, from autonomous cars and drones to domestic robots and personal devices. Each application domain comes with a rich set of requirements, such as legal policies, safety and security standards, company values, or simply public perception. These requirements must be realised as verifiable properties of software and hardware. Consider the following policy: a self-driving car must never break the highway code.

Deep Analysis: A Critical Enabler to Certifying Robotic and Autonomous Systems

Project number: 
300007
Develop techniques that assist in certifying robotic and autonomous systems through a deep analysis at the level of requirements, problem worlds and specifications.
Prof. Andrew Ireland
Heriot-Watt University

Safety-critical robotic and autonomous systems, such as Unmanned Air Vehicles (UAVs) that operate beyond visual line of sight, require the highest level of certification. Certifiers are concerned with how such systems behave within their environment, as defined by system-wide requirements, e.g. compliance with the rules of the air (SERA). In contrast, software developers focus on specifications: how the system software should behave based upon operational modes and input signals. Many catastrophic system failures, e.g.

3D vision and robotic navigation using Event and Polarisation Cameras

Project number: 
123407
The project will explore the use of emerging imaging modalities, such as event and polarisation cameras, to perform 3D vision in highly dynamic, complex and un-textured environments where classical approaches generally fail.
Prof. Yvan Petillot
Heriot-Watt University

Optical cameras have been very successfully used for 3D vision and robotic navigation in texture rich environments and good visibility conditions. However, they have strong limitations in more complex scenarios where the environment is either very dynamic or visibility is poor. In this thesis, you will explore new sensor modalities and how they can help solve these problems.

Multimodal fusion for large-scale 3D mapping

Project number: 
134001
The project will explore the combination of 3D point clouds with imaging modalities (colour, hyperspectral images) via machine learning and computer graphics to improve the characterization of complex 3D scenes.
Dr. Yoann Altmann
Heriot-Watt University

Lidar point clouds have been widely used to segment large 3D scenes such as urban areas and vegetated regions (forests, crops, …), and to build elevation profiles. However, efficient point cloud analysis in the presence of complex scenes and partially transparent objects (e.g., forest canopy) is still an unsolved challenge.
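As a minimal illustration of the kind of multimodal fusion involved (a hypothetical sketch, not the project's actual method), the simplest step in combining a point cloud with a colour image is to project each 3D point through the camera intrinsics and attach the pixel colour it lands on. The function name and interface below are assumptions for illustration; a real pipeline would also handle extrinsic calibration, occlusion, and distortion.

```python
import numpy as np

def colourise_points(points, image, K):
    """Attach an RGB colour to each 3D point by projecting it into a camera image.

    points: (N, 3) array of 3D points in the camera frame
    image:  (H, W, 3) colour image
    K:      (3, 3) camera intrinsics matrix
    """
    uv = (K @ points.T).T                    # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide
    h, w = image.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    valid = points[:, 2] > 0                 # keep only points in front of the camera
    colours = np.zeros((len(points), 3), dtype=image.dtype)
    colours[valid] = image[v[valid], u[valid]]
    return colours
```

For example, with intrinsics whose principal point is (50, 50), a point on the optical axis at depth 1 m picks up the colour of the central pixel.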

Shape-Programmable Soft Actuators

Project number: 
120019
The objective of this project is to design and develop soft actuators with programmable motion output.
Dr. Morteza Amjadi
Heriot-Watt University

Soft actuator materials are being actively pursued owing to their importance in soft robotics, artificial muscles, biomimetic devices, and beyond. Electrically, chemically, and light-activated actuators are the most widely explored soft actuators. Recently, significant efforts have been made to reduce the driving voltage and temperature of thermoresponsive actuators, develop chemical actuators that can function in air, and enhance the energy efficiency of light-responsive actuators.

Wearable and Stretchable Strain/Tactile Sensors for Soft Robotic Applications

Project number: 
120018
To design and develop stretchable optomechanical sensors, and to investigate their integration with soft gripper robots, towards soft robots with sensory feedback.
Dr. Morteza Amjadi
Heriot-Watt University

Wearable sensor technologies have recently attracted tremendous attention due to their potential applications in soft robotics, human motion detection, prosthetics, and personalized healthcare monitoring. Remarkable advances in materials science, nanotechnology, and biotechnology have led to the development of various wearable and stretchable sensors. For example, researchers including us have developed resistive and capacitive-type strain and pressure sensors and demonstrated their use in soft robotics, tactile sensing and perception, and human body motion detection.

Adaptive sensor fusion for resilient vehicle sensing

Project number: 
140028
To develop an integrated sensing system for the robotarium van to enable driver assistance and progress towards vehicle autonomy that is resilient to poor weather conditions, multi-sensor interference, adversarial attack and GPS denial.
Prof. Andrew Wallace
Heriot-Watt University

Automotive sensing must be robust and resilient. For example, optical sensors rapidly become ineffective in heavy rain or fog, and radar sensors provide low-resolution data that is inadequate for scene mapping and object identification. Further, most autonomous or semi-autonomous vehicle trials are conducted in sparse sensor environments, so that interference is rarely a problem, and assume pre-learnt road network data and continuous GPS availability.
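The adaptive weighting this project targets can be sketched, in its simplest form, as inverse-variance fusion: each sensor's estimate is weighted by its current reliability, so that when fog degrades a lidar its (inflated) noise variance automatically shifts trust to the radar. This is a textbook illustration under stated assumptions (independent Gaussian noise, known variances), not the project's actual algorithm; the function name is hypothetical.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Inverse-variance weighted fusion of independent sensor estimates.

    Sensors with higher noise variance receive proportionally less weight;
    the fused variance is always smaller than any single sensor's variance.
    """
    w = 1.0 / np.asarray(variances, dtype=float)   # reliability weights
    fused_var = 1.0 / w.sum()
    fused_mean = fused_var * np.sum(w * np.asarray(means, dtype=float))
    return fused_mean, fused_var

# Clear weather: lidar (accurate) and radar agree closely.
m, v = fuse_estimates([10.1, 10.5], [0.01, 0.25])
# In fog, an adaptive front end would inflate the lidar variance,
# letting the radar estimate dominate the fused result.
m_fog, v_fog = fuse_estimates([13.0, 10.5], [4.0, 0.25])
```

Making the variances themselves functions of observed conditions (weather, interference, GPS quality) is one way to frame the "adaptive" part of the problem.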


Rethinking Deep Learning on Remote Smart Sensors

Project number: 
140027
Develop new neural network compression mechanisms to accelerate neural networks on low powered FPGA and embedded GPU smart sensors
Dr. Robert Stewart
Heriot-Watt University

Neural networks for deep learning have proven successful in many domains, such as autonomous driving, conversational agents, autonomous robotics and computer vision. Neural network models are typically trained and executed on GPUs, but these have significant energy costs and lack the portability needed for remote smart devices. FPGAs and embedded GPUs solve this problem, but cannot host large trained models. Thus, mechanisms to compress neural networks are needed, so that models fit within hardware resource constraints without losing inference accuracy.
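Two of the standard compression mechanisms alluded to here are magnitude pruning (zeroing the smallest weights) and low-precision quantization (storing weights as 8-bit integers plus a scale factor). The NumPy sketch below is a generic illustration of both, under the assumption of unstructured pruning and symmetric linear quantization; it is not the project's proposed mechanism, and the function names are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear quantization: float32 -> int8 plus one float scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale                                 # dequantize with q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)   # a dummy weight matrix
pruned = magnitude_prune(w, sparsity=0.9)                # keep ~10% of weights
q, scale = quantize_int8(pruned)                         # 4x smaller storage
```

Combined, the two steps shrink this layer from 256 KiB of dense float32 to roughly 6.5 K int8 non-zeros (plus indices), the kind of reduction needed before a model can fit on an FPGA's on-chip memory; recovering lost accuracy typically requires fine-tuning afterwards.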

Grounded Visual Dialogue

Project number: 
240010
Model the dynamics of meaning and grounding in visual dialogue
Prof. Verena Rieser
Heriot-Watt University

This project will explore how to model dialogue phenomena in visual dialogue and how these phenomena contribute to task success.

Visual dialogue requires an agent to hold a meaningful conversation with a user in the context of an image. Such systems find potential applications in VR technology, dialogue-based image retrieval, and agents that can provide useful information about an image or other visual content to visually impaired people.