Explainable and interpretable task planning

Study how to generate explanations for robotic behaviour and how to align the goals of the robot with human intentions.
Description of the Project: 

The issue of explanations for AI systems cooperating with humans has attracted considerable interest of late. However, it is widely argued that current solutions based on local representations do not fully capture the reasoning behind the underlying decisions. The idea here is to take a fresh approach to explainability by appealing to causality and abstraction, as well as to model reconciliation and value alignment with the intentions of human users. That is, the goal is to generate explanations that humans can understand, and to accept suggestions from humans in a vocabulary that human users are comfortable with. A partial list of references includes:

  1. https://arxiv.org/pdf/1802.01013.pdf
  2. https://arxiv.org/abs/1801.08365
  3. https://arxiv.org/abs/1807.05527
  4. http://www.cs.toronto.edu/~hector/Papers/cogrob.pdf

Extensions to multi-agent settings are also possible.
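To make the model-reconciliation idea mentioned above concrete, the following is a minimal sketch (not part of the project specification): an explanation is taken to be a smallest set of differences between the robot's model and the human's model whose transfer makes the robot's plan valid in the human's updated model. The STRIPS-style models, actions, and fluents below are hypothetical toy examples, and the brute-force search over difference subsets is purely illustrative.

```python
from itertools import combinations

def plan_is_valid(model, plan, state, goal):
    """Simulate a plan under a STRIPS-like model (action -> (pre, add, delete)
    sets of fluents); return True if it executes and reaches the goal."""
    state = set(state)
    for action in plan:
        pre, add, delete = model[action]
        if not pre <= state:          # a precondition is not satisfied
            return False
        state = (state - delete) | add
    return goal <= state

def explanation(robot_model, human_model, plan, state, goal):
    """Return a smallest set of actions whose definitions, when transferred
    from the robot's model to the human's, make the plan valid for the human."""
    diffs = [a for a in robot_model if human_model.get(a) != robot_model[a]]
    for k in range(len(diffs) + 1):   # try smaller update sets first
        for subset in combinations(diffs, k):
            updated = dict(human_model)
            for a in subset:
                updated[a] = robot_model[a]
            if plan_is_valid(updated, plan, state, goal):
                return subset
    return None                        # plan cannot be reconciled

# Hypothetical example: the human wrongly believes "open" has an extra precondition.
robot_model = {"unlock": ({"have_key"}, {"unlocked"}, set()),
               "open":   ({"unlocked"}, {"door_open"}, set())}
human_model = {"unlock": ({"have_key"}, {"unlocked"}, set()),
               "open":   ({"unlocked", "door_clear"}, {"door_open"}, set())}

print(explanation(robot_model, human_model,
                  ["unlock", "open"], {"have_key"}, {"door_open"}))
# → ('open',)  i.e. "explain the true definition of 'open' to the human"
```

The minimality criterion here (fewest transferred action definitions) is one simple choice; the literature also considers costs over individual model differences and abstraction levels.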

Resources required: 
High-throughput computing for simulations
Project number: 
First Supervisor: 
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Strong programming skills; strong grasp of probability and statistics; strong grasp of logic and verification; ability to work independently
Funding Available: