Teaching Robots to Plan

The aim is to develop and implement methods for instructing robots directly through natural language, where the instructions refer to temporally extended plans executed on physical robots (e.g., for object manipulation).
Description of the Project: 

The vast majority of robot applications involve not just one isolated task (such as grasping an object) but carefully choreographed sequences of such tasks, with dependencies between tasks not only in terms of ordering but also in how the previous task should be performed (in a quantitative sense, at the level of motor control) in order to set up for the next. Moreover, these tasks involve numerous subjective variables: for example, how close should the robot come to a delicate object, or how hard should it pull on a cable? One simple way such a plan could be represented in code is sketched below.
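
To make this concrete, here is a minimal sketch (in Python; all names, parameters, and the state representation are hypothetical, not part of this project's design) of a temporally extended plan in which each sub-task carries continuous parameters capturing the subjective variables above, and executes against the state left behind by its predecessor:

    # Hypothetical sketch: a plan as a sequence of parameterised sub-tasks,
    # where each sub-task sees the state produced by the one before it.
    from dataclasses import dataclass, field


    @dataclass
    class SubTask:
        name: str                 # e.g. "grasp", "pull", "place"
        params: dict[str, float]  # continuous, "subjective" parameters

        def execute(self, state: dict[str, float]) -> dict[str, float]:
            # Placeholder: a real implementation would call the robot's
            # motor-control stack; here we just thread the state through.
            return {**state, f"{self.name}_done": 1.0}


    @dataclass
    class Plan:
        tasks: list[SubTask] = field(default_factory=list)

        def run(self, state: dict[str, float]) -> dict[str, float]:
            # Sequential execution captures the "set up for the next task"
            # dependency: each sub-task conditions on its predecessor's outcome.
            for task in self.tasks:
                state = task.execute(state)
            return state


    plan = Plan([
        SubTask("grasp", {"approach_dist_m": 0.02}),
        SubTask("pull",  {"pull_force_n": 5.0}),
    ])
    final_state = plan.run({"cable_attached": 1.0})

The point of the sketch is only that the quantitative parameters (approach distance, pull force) are first-class parts of the plan, which is what makes them learnable from demonstration.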

We will approach this work in a learning from demonstration setting, wherein the human user provides a sequence of multi-modal instructions that the robot parses through a process of grounding and interpretation in a latent space. The data will include signals from joint angles, vision, and natural language labels, with the expectation of an extended interaction.
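
As an illustration of the kind of model this implies, the following minimal sketch (assuming PyTorch; all dimensions, layer choices, and the fusion scheme are illustrative assumptions, not the specific architectures of the papers cited below) encodes joint angles, an image, and a language embedding into a shared latent space:

    # Illustrative sketch: grounding multi-modal demonstration data
    # (proprioception, vision, language) in a shared latent space.
    import torch
    import torch.nn as nn


    class MultiModalEncoder(nn.Module):
        def __init__(self, n_joints=7, img_feat=128, lang_feat=64, latent=32):
            super().__init__()
            self.joint_net = nn.Sequential(nn.Linear(n_joints, 64), nn.ReLU())
            self.img_net = nn.Sequential(  # tiny CNN over 64x64 RGB frames
                nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
                nn.Flatten(), nn.LazyLinear(img_feat), nn.ReLU(),
            )
            # 300-d language input, e.g. averaged word vectors (an assumption)
            self.lang_net = nn.Sequential(nn.Linear(300, lang_feat), nn.ReLU())
            # Fuse modalities, then predict a Gaussian over the latent space,
            # VAE-style, so language labels can later be tied to latent dims.
            self.fuse = nn.Linear(64 + img_feat + lang_feat, 2 * latent)

        def forward(self, joints, image, lang_emb):
            h = torch.cat([
                self.joint_net(joints),
                self.img_net(image),
                self.lang_net(lang_emb),
            ], dim=-1)
            mu, log_var = self.fuse(h).chunk(2, dim=-1)
            return mu, log_var  # parameters of q(z | demonstration)


    enc = MultiModalEncoder()
    mu, log_var = enc(torch.randn(8, 7),          # joint angles
                      torch.randn(8, 3, 64, 64),  # camera frames
                      torch.randn(8, 300))        # language embeddings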

Resources required: 
Project number: 
124026
First Supervisor: 
University: 
University of Edinburgh
Second Supervisor(s): 
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Robotics at the level of R:SS; machine learning (MLP and MLPR); strong programming skills (NN frameworks, as well as ROS and robot systems programming).
Desirable skills and knowledge: 
Understanding of NLP models.
References: 

Y. Hristov, D. Angelov, A. Lascarides, M. Burke, S. Ramamoorthy, "Disentangled Relational Representations for Explaining and Learning from Demonstration," Conference on Robot Learning (CoRL), 2019.

Y. Hristov, A. Lascarides, S. Ramamoorthy, "Interpretable Latent Spaces for Learning from Demonstration," Conference on Robot Learning (CoRL), 2018.

M. Burke, S. Penkov, S. Ramamoorthy, "From Explanation to Synthesis: Compositional Program Induction for Learning from Demonstration," Robotics: Science and Systems (R:SS), 2019.

D. Angelov, Y. Hristov, S. Ramamoorthy, "Using Causal Analysis to Learn Specifications from Task Demonstrations," International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2019.