Safe Human-Robot Teaming through Visually Grounded Interaction

The goal of this project is to enable robots to safely and effectively collaborate with humans in teams through grounded human-robot interaction.
Description of the Project: 

For humans and robots to collaborate effectively and safely on shared tasks in human-machine teams, team members need to develop a trusting working relationship. To do this, partners need a mutual understanding of the world and of the task at hand. As robotic systems become more human-like and autonomous, this relationship becomes even more important: once established, it allows human-machine teams to work more efficiently and more robustly, and to safely handle problematic situations, for example when failures occur on either the system or the human side.

This PhD project investigates methods that allow a human operator and a robot to collaborate by incrementally building common ground through situated human-robot dialogue. For example, an operator might instruct a UAV: “Inspect the fire on the right side of the well.” For the robot to carry out such instructions safely, it must infer the human's intent within the situated context (“What do you mean by well? Which well? I see two wells.”). The student will investigate theories and methods for human-robot interaction and goal/intent prediction for remote robots such as UAVs, using images and video from the robot's view (see figure), and for jointly mapping natural language and visual observations to actions. This is particularly challenging because the robots are remote, with restricted line of sight and intermittent communications. The work will be carried out both in simulation and with real UAVs.
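As a purely illustrative example of the last point, the sketch below jointly encodes a natural-language instruction and a camera frame and maps them to a discrete action, in the spirit of Misra et al. (2017). It is a minimal sketch under stated assumptions, not the project's architecture: the vocabulary, network sizes, tokenisation, and action set are placeholders.

```python
# Illustrative sketch only: map (instruction, image) -> action logits.
# Vocabulary, layer sizes, and the action set are placeholder assumptions.
import torch
import torch.nn as nn

class InstructionVisionPolicy(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_actions=6):
        super().__init__()
        # Language encoder: embed tokens and summarise the instruction with a GRU.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Vision encoder: small CNN over the robot's camera frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse both modalities and predict a distribution over discrete actions.
        self.policy = nn.Sequential(
            nn.Linear(hidden_dim + 32, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, tokens, image):
        _, h = self.gru(self.embedding(tokens))             # (1, batch, hidden_dim)
        lang = h.squeeze(0)                                  # (batch, hidden_dim)
        vis = self.cnn(image)                                # (batch, 32)
        return self.policy(torch.cat([lang, vis], dim=-1))  # (batch, num_actions)

# Forward pass with dummy data, checking shapes only.
model = InstructionVisionPolicy()
tokens = torch.randint(0, 1000, (1, 8))   # tokenised instruction (placeholder ids)
image = torch.rand(1, 3, 128, 128)        # RGB frame from the robot's view
print(model(tokens, image).shape)         # torch.Size([1, 6])
```

The dummy forward pass only checks tensor shapes; a real system would tokenise instructions against a proper vocabulary and train the policy, for example by imitation or reinforcement learning, before combining it with clarification dialogue for ambiguous instructions.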

Resources required: 
Robot for expressive interaction, e.g. the FurHat or Emys
Project number: 
240006
First Supervisor: 
University: 
Heriot-Watt University
Second Supervisor(s): 
First supervisor university: 
Heriot-Watt University
Essential skills and knowledge: 
Machine learning
Desirable skills and knowledge: 
ROS, interest in dialogue and NLP
References: 

Chai, J. Y., She, L., Fang, R., Ottarson, S., Littley, C., Liu, C., & Hanson, K. (2014). Collaborative Effort towards Common Ground in Situated Human-Robot Dialogue. In Proceedings of HRI 2014.

Misra, D. K., Langford, J., & Artzi, Y. (2017). Mapping Instructions and Visual Observations to Actions with Reinforcement Learning. https://arxiv.org/pdf/1704.08795.pdf

Deits, R., Tellex, S., Thaker, P., Simeonov, D., Kollar, T., & Roy, N. (2012). Clarifying commands with information-theoretic human-robot dialog. Journal of Human-Robot Interaction, 1(1), 78–95.