Verifying Robotic Behavior and Synthesizing Provably Correct Behavior
How can we ensure that robots behave as humans intend? To guarantee that the execution of complex actions produces the desired behavior of the robot, one must specify the required properties formally and then verify that every execution of the program satisfies them.
Some of these properties can only be learned at run time, while others may be specified by the modeler, but only partially. By leveraging ideas from hybrid systems, temporal logic, and programming languages, this project seeks to advance the area of verifiable robotics. Conversely, we may be interested in synthesizing programs or policies that are provably correct with respect to a desired specification. A partial list of references includes:
Extensions to multi-agent settings are also possible.
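As a minimal illustration of the specify-then-verify step described above, the sketch below checks two finite-trace temporal properties over a recorded execution: a safety property ("never collide", an always/G-style property) and a reachability property ("eventually reach the goal", an eventually/F-style property). The trace, its state fields, and the property choices are hypothetical and stand in for what a real verification tool would consume.

```python
def always(trace, pred):
    """G pred: pred holds at every step of the (finite) trace."""
    return all(pred(s) for s in trace)

def eventually(trace, pred):
    """F pred: pred holds at some step of the (finite) trace."""
    return any(pred(s) for s in trace)

# A hypothetical robot execution: each state records a position and a
# collision flag, as might be logged at run time.
trace = [
    {"pos": (0, 0), "collision": False},
    {"pos": (1, 0), "collision": False},
    {"pos": (2, 1), "collision": False},
    {"pos": (3, 3), "collision": False},  # goal region
]

safe = always(trace, lambda s: not s["collision"])            # safety
reaches_goal = eventually(trace, lambda s: s["pos"] == (3, 3))  # reachability

print(safe, reaches_goal)  # True True
```

Note that checking a single recorded trace only witnesses one execution; verification in the sense above must cover all executions, which is where model checking and hybrid-systems reachability analysis come in.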
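The synthesis direction can likewise be sketched in miniature: enumerate the memoryless policies of a small, hypothetical transition system and keep those whose induced execution satisfies the specification "always avoid the hazard and eventually reach the goal". Practical synthesis tools use far more scalable algorithms; this brute-force search only illustrates the problem statement.

```python
from itertools import product

states = ["start", "mid", "hazard", "goal"]
actions = ["a", "b"]

# Deterministic transitions: (state, action) -> next state (hypothetical).
T = {
    ("start", "a"): "mid",     ("start", "b"): "hazard",
    ("mid", "a"): "hazard",    ("mid", "b"): "goal",
    ("hazard", "a"): "hazard", ("hazard", "b"): "hazard",
    ("goal", "a"): "goal",     ("goal", "b"): "goal",
}

def run(policy, start="start", horizon=6):
    """Execute a memoryless policy and return the visited trace."""
    trace, s = [start], start
    for _ in range(horizon):
        s = T[(s, policy[s])]
        trace.append(s)
    return trace

def satisfies(trace):
    """Spec: never visit 'hazard', and visit 'goal' at some point."""
    return "hazard" not in trace and "goal" in trace

# Enumerate every action choice per state; keep the "correct" policies.
correct = [
    dict(zip(states, choice))
    for choice in product(actions, repeat=len(states))
    if satisfies(run(dict(zip(states, choice))))
]

# Every surviving policy must pick 'a' at start and 'b' at mid.
print(correct[0]["start"], correct[0]["mid"])  # a b
```

Because every execution of a synthesized policy is checked against the specification before the policy is returned, correctness holds by construction rather than by after-the-fact testing.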