Learning to manipulate deformable objects
The scientific focus of this project will be on learning to manipulate deformable objects and soft or amorphous materials. Practical examples of such robotic tasks in domestic environments include manipulating clothes and handling food items in the kitchen, tasks that demand careful representational choices for infinite-dimensional states and dynamics. A key focus of our approach will be on learning models of object dynamics, and on using such learned models to improve the generalisation of policy synthesis. In particular, we would like to bring together innovations in the following areas, in novel ways:
(1) the use of physics simulation-based representations to generate detailed object and object-environment interaction dynamics;
(2) neural network architectures that use these representations for fast, instance-specific system identification and physical parameter estimation;
(3) the use of these modules in policy-learning loops, combining learned dynamic movement primitives (DMPs) with residual learning of scene-specific correction policies.
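Item (3) can be illustrated with a minimal sketch: a one-dimensional discrete DMP whose forcing term is augmented by a scene-specific residual correction, in the spirit of the residual learning from demonstration work cited below. This is an illustrative sketch under standard DMP equations, not the project's actual implementation; the function names, gains, and the example residual are assumptions.

```python
import numpy as np

def dmp_rollout(x0, g, f_weights, residual=None, tau=1.0, dt=0.01, T=1.0,
                alpha=25.0, beta=6.25, alpha_s=4.0):
    """Integrate a 1-D discrete DMP from x0 towards goal g.

    f_weights parameterise the learned forcing term (e.g. fitted to a
    demonstration); `residual` is an optional scene-specific correction
    policy mapping the state (x, v, s) to an additive forcing correction.
    """
    n_basis = len(f_weights)
    centers = np.linspace(0.0, 1.0, n_basis)      # RBF centres in phase space
    widths = np.full(n_basis, float(n_basis) ** 2)
    x, v, s = float(x0), 0.0, 1.0                  # position, velocity, phase
    traj = [x]
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)           # basis activations
        f = s * (g - x0) * (psi @ f_weights) / (psi.sum() + 1e-10)
        if residual is not None:
            f = f + residual(np.array([x, v, s]))  # learned residual correction
        dv = (alpha * (beta * (g - x) - v) + f) / tau        # spring-damper + forcing
        v += dv * dt
        x += v * dt / tau
        s += (-alpha_s * s / tau) * dt                       # canonical system decay
        traj.append(x)
    return np.array(traj)
```

With zero forcing weights and no residual, the rollout simply converges to the goal; a residual policy (here a stand-in constant offset, in practice a small learned network) shifts the trajectory to correct for scene-specific contact effects, e.g. `dmp_rollout(0.0, 1.0, np.zeros(10), residual=lambda st: 5.0)`.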
Ideally, this work will build on recent work in our group on residual learning from demonstration for insertion tasks [1], online system identification from video input [2], and efficient calibration of simulation representations to observed data [3,4]. We will also leverage insights from our work on the inductive biases needed in neural network models to capture the manner of execution of forceful tasks such as wiping a table [5].
Dyson is a leading global consumer product company, with its UK campus based near Bristol. Dyson is currently investing heavily in Robotics and this project would involve collaboration with an industrial co-supervisor within Dyson’s Robotics Research group.
[1] T. Davchev, K. S. Luck, M. Burke, F. Meier, S. Schaal, S. Ramamoorthy, Residual learning from demonstration: Adapting dynamic movement primitives for contact-rich insertion tasks. https://arxiv.org/abs/2008.07682, https://sites.google.com/view/rlfd
[2] M. Asenov, M. Burke, D. Angelov, T. Davchev, K. Subr, S. Ramamoorthy, Vid2Param: Modelling of dynamics parameters from video, IEEE Robotics and Automation Letters, Vol 5(2): 414-421, 2020. https://arxiv.org/abs/1907.06422
[3] T. López-Guevara, R. Pucci, N.K. Taylor, M.U. Gutmann, S. Ramamoorthy, K. Subr, Stir to pour: Efficient calibration of liquid properties for pouring actions, In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020. http://rad.inf.ed.ac.uk/data/publications/2020/lopez-guevara2020stir.pdf
[4] T. López-Guevara, N.K. Taylor, M.U. Gutmann, S. Ramamoorthy, K. Subr, Adaptable pouring: Teaching robots not to spill using fast but approximate fluid simulation, Conference on Robot Learning (CoRL), 2017. http://proceedings.mlr.press/v78/lopez-guevara17a/lopez-guevara17a.pdf
[5] Y. Hristov, S. Ramamoorthy, Learning from demonstration with weakly supervised disentanglement, In Proc. International Conference on Learning Representations (ICLR), 2021. https://arxiv.org/abs/2006.09107, https://sites.google.com/view/weak-label-lfd