Verification of Neural Networks in Specialist Domains
As machine learning algorithms find their way into safety-critical systems, such as autonomous cars, robot nurses, and conversational agents, ensuring their safety and security becomes essential. At the same time, neural networks are known to be vulnerable to adversarial attacks: specially crafted inputs that cause unintended behaviour in trained neural networks. Due to these two factors, neural network verification has become a prominent topic in both the machine learning and verification communities, and is often described as one of the main challenges currently facing computer science and engineering.
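To make the notion of an adversarial attack concrete, the sketch below illustrates the fast gradient sign method (FGSM), one classic way of crafting such inputs, on a toy logistic model. All names and numbers here are illustrative assumptions; a real attack targets a trained network rather than this two-line model.

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """Perturb input x of a logistic model p = sigmoid(w . x) with true label y,
    stepping in the direction that increases the cross-entropy loss."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # model's confidence for class 1
    grad = (p - y) * w                       # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad)           # FGSM step of size eps

def confidence(w, v):
    return 1.0 / (1.0 + np.exp(-np.dot(w, v)))

w = np.array([1.0, -2.0, 0.5])   # toy "trained" weights (illustrative)
x = np.array([0.3, -0.4, 1.2])   # clean input with true label 1
x_adv = fgsm_perturb(x, w, y=1, eps=0.25)

# The model's confidence in the correct label drops after the small perturbation.
print(confidence(w, x), confidence(w, x_adv))
```

Verification methods aim to prove that no such small perturbation of a given input can change the network's answer, rather than merely testing a finite set of attacks.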
However, the vast majority of existing approaches to neural network verification address only one kind of algorithm, the feedforward neural network, and most commonly target computer vision applications. Yet different kinds of neural networks are popular in other application domains (e.g. conversational agents, natural language generation, system control), and these also require robust verification methods. This project will study and propose novel methods of neural network verification in specialist domains, for example natural language processing and generation (which involve recurrent neural networks and transformers) and/or reinforcement learning scenarios.
Neural Networks, Secure by Construction: An Exploration of Refinement Types. (joint with Wen Kokke, Daniel Kienitz, Robert Atkey, and David Aspinall)