Verifying Robotic Behavior and Synthesizing Provably Correct Behavior

Study how to verify robotic behavior against temporal specifications (e.g., safety properties) and/or synthesize controllers for reactive missions operating in dynamic environments with unknown components.
Description of the Project: 

How can we ensure that robots behave as desired by humans? To ensure that the execution of complex actions leads to the desired robot behavior, one must specify the required properties formally and then verify that these requirements are met by every execution of the program.

Some aspects of these properties can only be learned at run time, whereas other properties may be specified by the modeler, but only partially. By leveraging ideas from hybrid systems, temporal logic, and programming languages, this project aims to advance the area of verifiable robotics. Conversely, we might be interested in synthesizing programs or policies that are "correct" with respect to some desired specification. A partial list of references includes:

  1. https://link.springer.com/article/10.1007/s10626-016-0232-7
  2. https://dl.acm.org/citation.cfm?id=3006810
  3. https://arxiv.org/abs/1801.08365
  4. https://arxiv.org/abs/1807.05527

Extensions to multi-agent settings are also possible.
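To make the verification idea above concrete, here is a minimal sketch of run-time monitoring of a safety property of the form G(safe) ("safe always holds") over a robot trajectory. All names here (`State`, `min_clearance`, the toy trace) are illustrative assumptions for this sketch, not part of any particular framework from the references:

```python
# Minimal sketch: run-time monitoring of an LTL safety property G(safe)
# over a finite robot trajectory. Names and the toy trace are illustrative.

from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class State:
    x: float
    y: float
    obstacle_dist: float  # distance to nearest obstacle (metres)


def always(pred: Callable[[State], bool], trace: Iterable[State]) -> bool:
    """Check G(pred) on a finite trace: pred must hold at every step."""
    return all(pred(s) for s in trace)


def min_clearance(threshold: float) -> Callable[[State], bool]:
    """Atomic proposition: the robot keeps at least `threshold` clearance."""
    return lambda s: s.obstacle_dist >= threshold


# A toy trajectory in which the robot gradually drifts toward an obstacle.
trace = [State(x=0.1 * t, y=0.0, obstacle_dist=1.0 - 0.2 * t) for t in range(5)]

print(always(min_clearance(0.3), trace))  # prints False: clearance drops to 0.2
```

In a full verification setting one would check such properties over *all* executions of a model (e.g., with a model checker) rather than over a single simulated trace; a run-time monitor like this is the complementary tool for properties that can only be observed during execution.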

Resources required: 
High-throughput computing for simulations
Project number: 
300006
First Supervisor: 
University: 
University of Edinburgh
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Strong programming skills; strong grasp of probability and statistics; ability to work independently; strong grasp of logic and verification