Learning Dexterous Robotic Manipulation

Learning autonomous grasping and manipulation skills that are safe to deploy in human environments, using data-efficient deep reinforcement learning and human-robot skill transfer
Description of the Project: 

Background

A wide variety of robotic applications involve the handling of objects as the core process of task completion. To date, most of these jobs are still performed by people. Where robots are used, the solutions rely primarily on pre-designed rules or tele-operation (with operational time limited by operator cognitive overload), which inevitably limits performance in changing environments. This project comprises multiple challenging research topics in robotic manipulation.

Project description

The primary goal of the project is to solve the challenging problem of manipulating soft, deformable objects, advancing cutting-edge technologies in computer vision and machine learning. Grasping or manipulating soft objects, e.g. fruits, clothes or textiles, is much more difficult than handling rigid ones because the object deforms during interaction (cutting, crushing or folding). Beyond accurate world models of deformation for prediction, the robot also needs the cognitive capability to reason about where, how and why to manipulate objects. Simulating soft or deformable objects accurately is itself a challenge, so sim-to-real transfer is harder than for rigid-object manipulation.
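Because deformable-object simulation is inevitably inaccurate, one common way to ease sim-to-real transfer is domain randomization: varying the simulated physics each training episode so the learned policy cannot overfit to any single (imperfect) model. A minimal sketch follows; the parameter names and ranges are purely illustrative placeholders, since the actual simulator (e.g. MuJoCo or PyBullet soft bodies) exposes its own set of tunables:

```python
import random

random.seed(0)

def sample_sim_params():
    """Draw randomised deformable-object properties for one episode.

    Illustrative placeholders only; a real setup would randomise the
    simulator's actual soft-body parameters.
    """
    return {
        "stiffness": random.uniform(50.0, 500.0),  # N/m
        "damping": random.uniform(0.1, 2.0),
        "friction": random.uniform(0.3, 1.2),
        "mass": random.uniform(0.05, 0.5),         # kg
    }

# Each episode runs under a different plausible physics, so a policy
# trained across episodes must be robust to model error.
episodes = [sample_sim_params() for _ in range(10)]
```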

Multimodal perception for manipulation tasks is another problem to solve, e.g. integrating vision and haptic feedback. Vision mainly guides where and how to manipulate, while haptics mainly guides adaptation after contact, based on the estimation of the object's physical properties. The project will study the use of realistic model uncertainties to improve simulation, as well as to improve the robustness of the learning algorithms against noise and non-measurable errors. Processing such high-dimensional sensor input, e.g. encoding multimodal feedback into the state-action representation of a deep neural network, is also challenging.
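As a concrete illustration of such a multimodal state descriptor, the sketch below fuses a vision embedding and a haptic reading into a single action through a small two-stream network. NumPy is used for brevity, and every layer size, as well as the late-fusion-by-concatenation design, is an assumption for illustration, not the project's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MultimodalPolicy:
    """Toy policy fusing vision and haptic feedback into one action."""

    def __init__(self, img_dim=64, haptic_dim=6, hidden=32, action_dim=7):
        # One encoder per modality, fused before a shared action head.
        self.W_img = rng.normal(0.0, 0.1, (img_dim, hidden))
        self.W_hap = rng.normal(0.0, 0.1, (haptic_dim, hidden))
        self.W_out = rng.normal(0.0, 0.1, (2 * hidden, action_dim))

    def act(self, image_features, haptic_readings):
        z_img = relu(image_features @ self.W_img)   # vision: where/how to grasp
        z_hap = relu(haptic_readings @ self.W_hap)  # haptics: contact adaptation
        z = np.concatenate([z_img, z_hap])          # fused state descriptor
        return np.tanh(z @ self.W_out)              # bounded joint commands

policy = MultimodalPolicy()
action = policy.act(rng.normal(size=64), rng.normal(size=6))
```

In a full system the vision stream would be a convolutional encoder over camera images and the haptic stream would process tactile or force-torque signals; the key idea shown here is only the fusion into one action-producing representation.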

Another approach the project will investigate is human-robot skill transfer. Learning from human demonstration enables the robot to acquire more natural and effective motion strategies. Demonstrations can be provided in several ways, such as motion capture, tele-operation or virtual reality devices. Data collection will use a tele-operation interface to gather sufficient successful manipulation data to train a deep neural network in a supervised manner, encoding multimodal sensory feedback in the control policies. This will kickstart deep reinforcement learning and bootstrap the learning process in both simulation and on real hardware.
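The supervised kickstarting step can be sketched as behaviour cloning: regress actions onto states over the demonstration set with a mean-squared-error loss. The synthetic dataset and linear policy below are stand-ins for the real tele-operation data and deep network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a tele-operation dataset: paired (state, action) samples.
# In the project these would come from successful demonstrated grasps.
true_W = rng.normal(size=(10, 7))
states = rng.normal(size=(500, 10))
actions = states @ true_W + 0.01 * rng.normal(size=(500, 7))

# Behaviour cloning: fit the policy to demonstrations by minimising the
# mean-squared error between predicted and demonstrated actions.
W = np.zeros((10, 7))
lr = 0.1
for _ in range(200):
    pred = states @ W
    grad = states.T @ (pred - actions) / len(states)  # dMSE/dW
    W -= lr * grad

mse = np.mean((states @ W - actions) ** 2)
```

The cloned policy then serves as the initialisation for deep reinforcement learning, which refines it through interaction rather than learning from scratch.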

The outcome of the project will be a critical enabler for physical interaction: the capability of autonomously grasping and manipulating objects that cannot be handled by suction. This showcases the potential translation to a much wider range of flexible and adaptive object-handling tasks in unstructured or extreme industrial environments.

Project number: 
124025
First Supervisor: 
University: 
University of Edinburgh
Second Supervisor(s): 
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Linux, C++, Python, machine learning, ROS, TensorFlow, experience with physics engines/simulators
Desirable skills and knowledge: 
PyTorch, experience with physics engines (ODE, Bullet, MuJoCo, PhysX 3.4) and physics simulators (Gazebo, PyBullet, V-REP, Unity)
References: 
  1. S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell and K. Bousmalis, "Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks," arXiv:1812.07252, 2018.
  2. Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu and N. de Freitas, "Sample Efficient Actor-Critic with Experience Replay," arXiv:1611.01224, 2016.
  3. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski and S. Petersen, "Human-level Control through Deep Reinforcement Learning," Nature, 518(7540), p. 529, 2015.
  4. C. Yang, K. Yuan, W. X. Merkt, S. Vijayakumar, T. Komura and Z. Li, "Learning Whole-body Motor Skills for Humanoids," in IEEE-RAS International Conference on Humanoid Robots, 2018.
  5. P. Wohlhart and V. Lepetit, "Learning Descriptors for Object Recognition and 3D Pose Estimation," in IEEE Conference on Computer Vision and Pattern Recognition, 2015.