Teaching Robots to Plan
Most applications of robots involve not a single isolated task (such as grasping an object) but carefully choreographed sequences of such tasks. The dependencies between tasks concern not only ordering but also how each task should be performed, in a quantitative sense at the level of motor control, so as to set up for the next. Moreover, these tasks involve numerous subjective variables, e.g., how close should the robot come to a delicate object, or how hard should it pull on a cable?
We approach this work in a learning from demonstration setting, in which the human user provides a sequence of multi-modal instructions that the robot parses through a process of grounding and interpretation in a latent space. The data include signals from joint angles, vision, and natural-language labels, collected over an extended interaction.
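As a rough illustrative sketch (not the actual method from the papers below), a single demonstration frame combining joint angles, pre-extracted visual features, and a language label might be embedded into a shared latent space. Here the learned encoder is stood in for by a fixed linear projection; the data structure, dimensions, and `embed` function are all hypothetical.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DemoFrame:
    joint_angles: np.ndarray   # e.g. 7-DoF arm configuration
    vision_feat: np.ndarray    # pre-extracted image features
    label: str                 # natural-language annotation

def embed(frame: DemoFrame, proj: np.ndarray) -> np.ndarray:
    """Map concatenated joint/vision signals into a shared latent space.

    In a real system a learned encoder (conditioned on the language
    label as well) would replace this fixed linear projection.
    """
    x = np.concatenate([frame.joint_angles, frame.vision_feat])
    return proj @ x

rng = np.random.default_rng(0)
frame = DemoFrame(joint_angles=rng.normal(size=7),
                  vision_feat=rng.normal(size=16),
                  label="approach the cup slowly")
proj = rng.normal(size=(4, 23))   # latent dimension 4, input dimension 7 + 16
z = embed(frame, proj)
print(z.shape)                    # (4,)
```

A sequence of such latent codes, aligned with the language labels, is the kind of representation over which grounding and interpretation could then operate.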
Y. Hristov, D. Angelov, A. Lascarides, M. Burke, S. Ramamoorthy, Disentangled Relational Representations for Explaining and Learning from Demonstration, Conference on Robot Learning (CoRL), 2019.
Y. Hristov, A. Lascarides, S. Ramamoorthy, Interpretable latent spaces for learning from demonstration, Conference on Robot Learning (CoRL), 2018.
M. Burke, S. Penkov, S. Ramamoorthy, From explanation to synthesis: Compositional program induction for learning from demonstration, Robotics: Science and Systems (RSS), 2019.
D. Angelov, Y. Hristov, S. Ramamoorthy, Using causal analysis to learn specifications from task demonstrations, In Proc. International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2019.