Understanding referential communication through the lens of child-robot interaction

We aim to understand and devise efficient mechanisms for referential communication in robots, through the lens of experiments on children's referential communication with a robot.
Description of the Project: 

Referential communication refers to how an agent communicates their intended meaning to another agent about a referent, or object. This is a crucial aspect of human-robot interaction, especially when the robot must collaborate with human users in order to help them perform a shared task.

We will study this through simple but realistic interactions in the form of children's communication with robots, in the context of physical manipulation tasks such as building assemblies with toys. We will use a combination of environmental visual sensing and eye tracking of the child to ground the communication for the robot. Our experiments will be inspired both by practically interesting aspects of robot manipulation tasks (e.g., interactive physical games) and by the experimental methodology of developmental psychology.
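As a rough illustration of how gaze could help ground a referring act, the sketch below combines scene knowledge (object positions) with eye-tracking fixations to pick out the most likely referent, and flags ambiguity when two candidates are nearly tied. All names, data structures, and thresholds here are illustrative assumptions for exposition, not part of the project's methodology.

```python
# Hypothetical sketch: grounding a referring act by combining scene
# knowledge (object positions) with a child's gaze fixations.
# The class names, hit-test geometry, and margin are all assumptions.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    x: float       # object centre in image coordinates
    y: float
    radius: float  # rough extent, used as a gaze "hit" region

def score_fixations(objects, fixations):
    """Count how many gaze fixations fall on each object."""
    scores = {obj.name: 0 for obj in objects}
    for fx, fy in fixations:
        for obj in objects:
            if (fx - obj.x) ** 2 + (fy - obj.y) ** 2 <= obj.radius ** 2:
                scores[obj.name] += 1
    return scores

def resolve_referent(objects, fixations, margin=2):
    """Return the most-fixated object's name, or None when the top two
    candidates are within `margin` fixations of each other, in which
    case the robot could ask a clarifying question."""
    scores = score_fixations(objects, fixations)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < margin:
        return None  # ambiguous: trigger a disambiguation request
    return ranked[0][0]

scene = [SceneObject("red_block", 100, 100, 30),
         SceneObject("blue_block", 200, 100, 30)]
gaze = [(98, 102), (105, 95), (103, 99), (202, 101)]
print(resolve_referent(scene, gaze))  # red_block (3 fixations vs. 1)
```

In practice the fixation counts would come from eye-tracking glasses registered against the environmental cameras, and the ambiguity test is exactly the point at which the robot might "ask" for disambiguation.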

Research questions will include:

How might the robot 'ask' for disambiguation when the communication is ambiguous or underspecified?

How does the child treat the robot (with assumed agency and theory of mind, as if a younger, less experienced child, or as an older, more experienced adult)? If children are primed with prior knowledge about the robot, how will this influence their communication (e.g., the robot is 'all-knowing', or the robot is ignorant and needs to be taught a new skill)?

Resources required: 
PR2 robot, eye-tracking glasses, environmental visual sensing
Project number: 
200015
First Supervisor: 
University: 
University of Edinburgh
Second Supervisor(s): 
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Robotics and machine learning at the level of our MSc programme, ROS and Python/C++ programming
Desirable skills and knowledge: 
Basic human experimental design, knowledge of natural language understanding
References: 

Penkov, S., Bordallo, A., & Ramamoorthy, S. (2017, June). Physical symbol grounding and instance learning through demonstration and eye tracking. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore.