Deep Analysis: A Critical Enabler to Certifying Robotic and Autonomous Systems
Safety-critical robotic and autonomous systems, such as Unmanned Air Vehicles (UAVs) that operate beyond visual line of sight, require the highest level of certification. Certifiers are concerned with how such systems behave within their environment, as defined by system-wide requirements, e.g. compliance with the rules of the air (SERA, the Standardised European Rules of the Air). In contrast, software developers focus on specifications: how the system software should behave based upon operational modes and input signals. Many catastrophic system failures, e.g. the loss of the NASA Mars Polar Lander, can be attributed to inconsistencies and omissions at the level of requirements and specifications. The goal of this project is to develop automated techniques that detect such issues as early as possible and provide human-oriented guidance as to how they can be addressed. These techniques will represent a critical enabler to certifying robotic and autonomous systems.
The notion of a problem world will play a key role in the project, i.e. an abstraction of the world in which the intended system will operate. A problem world may include a broad range of domains, e.g. digital, mechanical, physical and human, where each domain is defined in terms of assumptions and properties. Certification at this level involves verifying that a specification is correct with respect to both the requirements and the associated problem world. This requires a level of analysis that is deeper than that provided by conventional software engineering approaches. Such deep analysis demands precision in terms of both representation and reasoning. Formal methods provide that precision via formal guarantees (proofs) and counter-examples (defects).

What we are proposing, however, goes beyond mainstream formal methods. We are interested in turning counter-examples into guidance as to how defects may be addressed via a reformulation of specifications, problem worlds and requirements, e.g. introducing fault tolerance into a given problem world. In addition, given a formal representation of a problem world, we also aim to mechanize a what-if style of analysis, i.e. using generic patterns of system failure to challenge the assumptions and decisions of the developer. Clearly, the developer must remain in charge: what we are proposing are techniques that will broaden the developer's creativity and productivity, while underpinning their decision making with formal guarantees. To achieve this, the techniques will need to translate between formal representations and a notation that is understandable by both developers and certifiers. Currently, problem worlds are handcrafted in collaboration with domain experts, so they are costly to develop and maintain. Another avenue to explore, therefore, is the extent to which the creation, reuse and maintenance of problem worlds can be mechanized.
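To make the idea concrete, the following is a minimal, purely illustrative sketch of the kind of deep analysis described above. All names and the scenario (a toy UAV separation requirement, a sensor-based avoidance specification) are hypothetical assumptions, not part of the project's tooling. It shows how a specification can verify under an optimistic problem world yet yield a counter-example once a more realistic domain assumption (sensors may fail) is introduced, and how that counter-example points towards a reformulation such as fault tolerance.

```python
from itertools import product

# Toy problem world: a UAV must maintain separation from an intruder aircraft.
# Domain variables (hypothetical, for illustration only):
#   intruder_present -- an intruder aircraft is nearby (physical domain)
#   sensor_ok        -- the detection sensor is functioning (digital domain)

def sensor_reading(intruder_present, sensor_ok):
    # Domain property: a failed sensor reports nothing.
    return intruder_present and sensor_ok

def spec_avoid(intruder_present, sensor_ok):
    # Specification (software level): avoid whenever the sensor reports an intruder.
    return sensor_reading(intruder_present, sensor_ok)

def requirement_met(intruder_present, avoid):
    # Requirement (system level): avoid whenever an intruder is actually present.
    return avoid or not intruder_present

def find_counterexamples(assumptions):
    """Exhaustively check the specification against the requirement under
    the given problem-world assumptions (a predicate over domain states)."""
    cexs = []
    for intruder, sensor in product([True, False], repeat=2):
        if not assumptions(intruder, sensor):
            continue  # state excluded by the problem-world assumptions
        avoid = spec_avoid(intruder, sensor)
        if not requirement_met(intruder, avoid):
            cexs.append({"intruder_present": intruder, "sensor_ok": sensor})
    return cexs

# Optimistic problem world: sensors never fail -- the specification verifies.
assert find_counterexamples(lambda i, s: s) == []

# Realistic problem world: sensors may fail -- a counter-example appears
# (intruder present, sensor failed), suggesting a reformulation such as
# sensor redundancy or another form of fault tolerance.
print(find_counterexamples(lambda i, s: True))
```

In a realistic setting the exhaustive search would be replaced by a model checker or theorem prover, but the shape of the analysis, and of the guidance extracted from the counter-example, is the same.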
The project will build upon existing tools and techniques. Firstly, D-RisQ will provide support for their software requirements tool, Kapture, along with their expertise in certifying safety-critical autonomous systems. Secondly, we will build upon a range of reasoning techniques and ideas developed at Heriot-Watt and Edinburgh universities: techniques that combine modelling and reasoning knowledge, as well as techniques that use failure analysis to automate representational change and the discovery of missing knowledge.
- D-RisQ Kapture Case Study: UAV Decision Making System – available via the D-RisQ website: http://www.drisq.com/
- ABC Repair System for Datalog-like Theories: https://www.research.ed.ac.uk/portal/en/publications/abc-repair-system-for-dataloglike-theories(c4118898-7705-408d-98e9-726f0404edf1).html
- Reasoned modelling critics: Turning failed proofs into modelling guidance: https://www.sciencedirect.com/science/article/pii/S016764231100092X
- Discovery of Invariants through Automated Theory Formation: https://link.springer.com/article/10.1007/s00165-012-0264-1