Robots Safe and Secure by Construction
Robotic applications are spreading across a variety of domains, from autonomous cars and drones to domestic robots and personal devices. Each application domain comes with a rich set of requirements, such as legal policies, safety and security standards, company values, or simply public perception. These requirements must be realised as verifiable properties of software and hardware. Consider the following policy: a self-driving car must never break the highway code. A range of properties needs to be formulated and then verified to implement this policy: for example, the neural network responsible for street sign recognition must be verified to never misclassify street signs; the neural network that monitors the road must correctly recognise pedestrians and obstacles; and the “must never” requirement should be refined with probabilistic modelling to account for real-life scenarios, risks and liabilities. Each of these verifiable properties may turn out to be a difficult verification task in its own right. The very features we value in neural networks (adaptivity, the ability to generalise from noisy data) become sources of security threats. For example, neural networks are known to be vulnerable to adversarial attacks, i.e. specially crafted inputs that can produce an unexpected and possibly dangerous output. Verification of neural networks is an active area of research, but existing methods are far from being able to match desired policies in generality, levels of abstraction, and lifecycle coverage.
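To make the adversarial-attack threat concrete, the following is a minimal sketch of the fast gradient sign method on a toy linear classifier. The weights, step size, and inputs are illustrative assumptions, not part of any real robotic system; for a linear model the gradient used by the attack is just a difference of weight rows.

```python
import numpy as np

# Hypothetical toy "network": a linear two-class classifier.
# Weights and the perturbation budget eps are illustrative assumptions.
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def predict(x):
    return int(np.argmax(W @ x))

def fgsm(x, true_class, eps):
    # Fast gradient sign method: step in the input direction that
    # increases the wrong class's logit relative to the true class's.
    # For a linear model that gradient is a difference of weight rows.
    grad = W[1 - true_class] - W[true_class]
    return x + eps * np.sign(grad)

x = np.array([0.6, 0.4])              # classified as class 0
x_adv = fgsm(x, predict(x), eps=0.3)  # crafted input flips the decision
print(predict(x), predict(x_adv))     # prints: 0 1
```

A tiny, human-imperceptible budget per coordinate suffices to flip the decision here, which is exactly the behaviour a verified robustness property must rule out.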
This project will develop a novel “lightweight” (or type-driven) methodology for the verification of neural networks employed in robotics. Starting with an application domain of the student’s choice, we will define necessary and sufficient verification properties for the application, and implement them as safety and security contracts that a neural network implementation must satisfy in order to be deemed sufficiently safe and secure.
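As a rough illustration of what such a contract might look like, the sketch below states an L-infinity robustness property and falsifies it by random sampling. All names and the toy classifier are assumptions for illustration; a real verifier such as Marabou [3] would check the property exhaustively rather than by sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_at(predict, x, eps, n_samples=1000):
    """Hypothetical robustness contract: the decision must not change
    for any input within an L-infinity ball of radius eps around x.
    This only searches for counterexamples; returning True is evidence,
    not a proof."""
    label = predict(x)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != label:
            return False   # contract violated: counterexample found
    return True            # no counterexample found among the samples

# Toy classifier standing in for a trained network (assumption).
def predict(x):
    return int(x.sum() > 0)

print(robust_at(predict, np.array([1.0, 1.0]), eps=0.1))   # far from the boundary
print(robust_at(predict, np.array([0.05, 0.0]), eps=0.1))  # near the boundary
```

In the envisaged type-driven setting, a contract of this shape would be attached to the network's interface and discharged by a verification backend, so that code using the network can rely on the property by construction.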
1. E. Komendantskaya et al. Neural network verification for the masses. CoRR, abs/1907.01297, 2019.
2. J. Woodcock et al. Probabilistic semantics for RoboChart: a weakest completion approach. In UTP 2019, pp. 80–105, 2019.
3. G. Katz et al. The Marabou framework for verification and analysis of deep neural networks. In CAV 2019, Part I, vol. 11561 of LNCS, pp. 443–452. Springer, 2019.