Accountable Conversational AI for Human-Robot Collaboration
Ambiguity and vagueness are pervasive in human conversation, and their detection and resolution through clarificational dialogue are key to collaborative task success. State-of-the-art HRI systems cannot handle this, increasing the risk of miscommunication in human-machine collaboration, especially in safety-critical environments.
This project will combine an existing linguistically informed model of conversation with machine learning to enable (visually) grounded human-machine collaboration, including mechanisms for repair and clarification that support the interactive detection and resolution of miscommunication. The project will depend crucially on semantic models that employ structured, transparent, and modifiable representations.
The developed model will be deployed and evaluated on a suitable robotic platform (e.g. Pepper, iCub, or TIAGo) at the Edinburgh Centre for Robotics.
- Arash Eshghi, Igor Shalyminov, and Oliver Lemon. Bootstrapping incremental dialogue systems from minimal data: linguistic knowledge or machine learning? In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017.
- Amanda Cercas Curry, Ioannis Papaioannou, Alessandro Suglia, Shubham Agarwal, Igor Shalyminov, Xinnuo Xu, Ondřej Dušek, Arash Eshghi, Ioannis Konstas, Verena Rieser, and Oliver Lemon. Alana v2: entertaining and informative open-domain social dialogue using ontologies and entity linking. In Alexa Prize Proceedings, Amazon re:Invent, Las Vegas, 2018.
- Yanchao Yu, Arash Eshghi, and Oliver Lemon. Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings. In Proceedings of the Robo-NLP Workshop at ACL, 2017. (Best Paper Award)
- Ioannis Papaioannou, Christian Dondrup, and Oliver Lemon. Human-robot interaction requires more than slot filling: multi-threaded dialogue for collaborative tasks and social conversation. In Proceedings of the AI-MHRI Workshop, 2018.
- Yanchao Yu, Arash Eshghi, and Oliver Lemon. An incremental dialogue system for learning visually grounded word meanings (demonstration system). In Proceedings of Dialogue and Perception, 2018.
- Arash Eshghi, Christine Howes, Julian Hough, Eleni Gregoromichelaki, and Matthew Purver. Feedback in conversation as incremental semantic update. In Proceedings of the 11th International Conference on Computational Semantics (IWCS), 2015.