Verification of Neural Networks in Specialist Domains

Propose novel methods of neural network verification in specialist domains, such as natural language processing and generation, or in reinforcement learning scenarios.
Description of the Project: 

As machine learning algorithms find their way into safety-critical systems, such as autonomous cars, robot nurses, and conversational agents, ensuring their safety and security becomes crucial. At the same time, neural networks are known to be vulnerable to adversarial attacks: specially crafted inputs that cause unintended behaviour in trained neural networks. Due to these two factors, neural network verification has become a hot topic in both the machine learning and verification communities, and is often described as one of the main challenges facing computer science and engineering today.
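To make the notion of an adversarial attack concrete, the following sketch applies the well-known Fast Gradient Sign Method to a toy linear classifier; every name and value here is an illustrative assumption, not part of the project brief. For a linear model the gradient of the score with respect to the input is simply the weight vector, so the attack reduces to one line:

```python
import numpy as np

# Toy linear classifier: the sign of (w @ x + b) gives the class.
# Weights and inputs below are illustrative assumptions.
w = np.array([1.0, -1.0])
b = 0.0

def score(x):
    return float(w @ x + b)  # positive score => class +1

def fgsm(x, eps):
    # Fast Gradient Sign Method: nudge every coordinate by eps in the
    # direction that pushes the score towards the opposite class; for
    # this linear model the input gradient of the score is exactly w.
    return x - eps * np.sign(w) * np.sign(score(x))

x = np.array([0.5, -0.2])   # score 0.7  => class +1
x_adv = fgsm(x, eps=0.4)    # score -0.1 => class flipped
```

A perturbation of at most 0.4 per coordinate flips the predicted class, which is precisely the kind of behaviour verification methods aim to rule out within a given perturbation budget.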

However, the vast majority of existing approaches to neural network verification address only one kind of architecture, namely feedforward neural networks, and most commonly target computer vision applications. Yet different kinds of neural networks are popular in different application domains (e.g. conversational agents, natural language generation, system control), and these also require robust verification methods. This project will study and propose novel methods of neural network verification in specialist domains, for example in natural language processing and generation (which involve recurrent neural networks and transformers) and/or in reinforcement learning scenarios.
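One standard technique behind feedforward verification, which the project could take as a baseline, is interval bound propagation: soundly over-approximating a network's outputs over a whole box of inputs. A minimal numpy sketch follows; the network weights and bounds are illustrative assumptions, not data from the project:

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    # Sound interval bounds for y = W @ x + b over the box [lo, hi]:
    # positive weights map lower inputs to lower outputs, while
    # negative weights swap the roles of the two bounds.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def verify_box(layers, lo, hi, target):
    # Certify that every input in the box [lo, hi] is classified as
    # `target` by a feedforward ReLU network given as (W, b) pairs.
    for W, b in layers[:-1]:
        lo, hi = affine_bounds(W, b, lo, hi)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = affine_bounds(*layers[-1], lo, hi)
    # Robust if the target's worst-case logit beats every other
    # class's best-case logit.
    return all(lo[target] > hi[i] for i in range(len(lo)) if i != target)

# Illustrative 2-2-2 network (identity weights, assumed for the sketch).
layers = [(np.eye(2), np.zeros(2)),
          (np.eye(2), np.array([1.0, 0.0]))]
print(verify_box(layers, np.array([0.0, 0.0]),
                 np.array([0.1, 0.1]), target=0))  # prints True
```

Extending such bounds to recurrent networks and transformers, where the same weights are applied repeatedly across time steps or tokens, is exactly where specialist verification methods become necessary.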

Resources required: 
Standard computing resources plus access to HPC in the Robotarium
Project number: 
First Supervisor: 
Heriot-Watt University
Second Supervisor(s): 
First supervisor university: 
Heriot-Watt University
Essential skills and knowledge: 
Knowledge of machine learning algorithms or verification algorithms.
Desirable skills and knowledge: 
Good programming skills.