Accountable Conversational AI for Human-Robot Collaboration

To extend & develop models for (Visually) Grounded Human-Machine Collaboration in conversation, including mechanisms for the detection & resolution of ambiguity & vagueness, to mitigate the risk of miscommunication.
Description of the Project: 

Ambiguity & vagueness are pervasive in human conversation, and their detection and resolution via clarificational dialogue is key to collaborative task success. State-of-the-art HRI systems cannot yet handle these phenomena, which increases the risk of miscommunication in human-machine collaboration, especially in safety-critical environments.

This project will combine an existing linguistically informed model of conversation with machine learning to enable (visually) grounded human-machine collaboration, including repair & clarification mechanisms that support the interactive detection & resolution of miscommunication. The project will depend crucially on semantic models that employ structured, transparent & modifiable representations.
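As a rough, purely illustrative sketch (not part of the project specification or its actual model), the toy Python fragment below shows the kind of mechanism at issue: an agent resolves a referring expression against the visible scene and, when the referent is ambiguous or unresolvable, issues a clarification request rather than acting on a guess. All names and the attribute-matching rule are hypothetical simplifications.

    # Toy sketch of clarificational dialogue for grounded reference
    # resolution. Hypothetical names throughout; NOT the project's model.
    from dataclasses import dataclass

    @dataclass
    class Obj:
        name: str    # internal identifier
        colour: str
        shape: str

    STOPWORDS = {"the", "a", "please", "pick", "up"}

    def matches(obj, words):
        # An object matches if every content word names one of its attributes.
        return words <= {obj.colour, obj.shape}

    def interpret(utterance, scene):
        """Resolve a referring expression against the scene; return an
        action if the referent is unique, else a clarification request."""
        words = set(utterance.lower().split()) - STOPWORDS
        referents = [o for o in scene if matches(o, words)]
        if len(referents) == 1:
            return "ACT: grasp " + referents[0].name
        if not referents:
            return "CLARIFY: I can't see anything like that. Can you rephrase?"
        options = " or ".join("the %s %s" % (o.colour, o.shape) for o in referents)
        return "CLARIFY: Do you mean %s?" % options

    scene = [Obj("obj1", "red", "cube"), Obj("obj2", "blue", "cube")]
    print(interpret("pick up the cube", scene))      # ambiguous -> clarify
    print(interpret("pick up the red cube", scene))  # unique -> act

In the project itself, such decisions would of course be driven by the structured, modifiable semantic representations and learned models described above, rather than by string matching.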

The developed model will be deployed and evaluated on a suitable robotic platform (e.g. Pepper, iCub, or TIAGo) at the Edinburgh Centre for Robotics.

The project will build on the Interaction Lab's work on the recently concluded EPSRC BABBLE project, the ongoing H2020 MuMMER project, the Amazon Alexa Prize Challenge, and the ORCA Hub.

Resources required: 
GPU machines for model training. Robotarium HRI lab for data collection and evaluation experiments. Robot (Pepper, TIAGo, or similar).
Project number: 
240008
First Supervisor: 
University: 
Heriot-Watt University
First supervisor university: 
Heriot-Watt University
Essential skills and knowledge: 
Good programming skills, a background in AI, and some knowledge of machine learning.
Desirable skills and knowledge: 
Machine Learning, Natural Language Processing, Linguistics, Scientific Method (designing, running & analysing experiments)
References: 
  1. Arash Eshghi, Igor Shalyminov, and Oliver Lemon. Bootstrapping incremental dialogue systems from minimal data: linguistic knowledge or machine learning? In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017.
  2. Amanda Cercas Curry, Ioannis Papaioannou, Alessandro Suglia, Shubham Agarwal, Igor Shalyminov, Xinnuo Xu, Ondřej Dušek, Arash Eshghi, Ioannis Konstas, Verena Rieser, and Oliver Lemon. Alana v2: entertaining and informative open-domain social dialogue using ontologies and entity linking. In Alexa Prize Proceedings, Amazon re:Invent, Las Vegas, 2018.
  3. Yanchao Yu, Arash Eshghi, and Oliver Lemon. Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings. In Proceedings of the Robo-NLP Workshop at ACL, 2017. (Best Paper Award)
  4. Ioannis Papaioannou, Christian Dondrup, and Oliver Lemon. Human-robot interaction requires more than slot filling: multi-threaded dialogue for collaborative tasks and social conversation. In Proceedings of the AI-MHRI Workshop, 2018.
  5. Yanchao Yu, Arash Eshghi, and Oliver Lemon. An incremental dialogue system for learning visually grounded word meanings (demonstration system). In Proceedings of Dialogue and Perception, 2018.
  6. Arash Eshghi, Christine Howes, Julian Hough, Eleni Gregoromichelaki, and Matthew Purver. Feedback in conversation as incremental semantic update. In Proceedings of the 11th International Conference on Computational Semantics (IWCS), 2015.