Learning cross-modal models of surgical skill

We would like to develop models of surgical skill, based on rich cross-modal data obtained from surgical training kits and other laboratory-based mockups, for the purpose of quantifying these skills and enabling the synthesis of similar behaviours in robots.
Description of the Project: 

There is significant interest in characterising surgical skill in the form of detailed models that correlate hand movements, applied forces, and the shape and texture of the tissues being manipulated.

In this project, we would like to explore methods for learning such models from data gathered during realistic tasks performed with surgical training kits and other laboratory-based mockups. We expect these models to be based on neural network architectures, such as variational autoencoders, that learn latent-space descriptions of the underlying skills from rich combinations of input modalities.
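As an illustrative sketch only (not the project's prescribed architecture), a minimal variational autoencoder over concatenated kinematic and force features might look like the following. All dimensions, the network sizes, and the synthetic training data are placeholders; real inputs would come from the training-kit sensors.

```python
import torch
import torch.nn as nn

class SkillVAE(nn.Module):
    """VAE over concatenated multimodal features
    (e.g. hand kinematics + applied forces). Dimensions are illustrative."""
    def __init__(self, kin_dim=12, force_dim=6, latent_dim=4, hidden=64):
        super().__init__()
        in_dim = kin_dim + force_dim
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def elbo_loss(x, recon, mu, logvar):
    # Negative ELBO: reconstruction error plus KL divergence to N(0, I).
    recon_err = ((recon - x) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
    return recon_err + kl

# Synthetic stand-in data; real data would come from the training kit.
torch.manual_seed(0)
x = torch.randn(256, 18)  # 12 kinematic + 6 force features per sample
model = SkillVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    recon, mu, logvar = model(x)
    loss = elbo_loss(x, recon, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, the posterior mean `mu` provides a compact latent description of each observed movement, which is the kind of representation the project would build on.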

The usefulness of these models will be evaluated in terms of their ability to quantify the safety and effectiveness of observed movements, and in terms of their utility as constraints for synthesising similar movements on robots.
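As a hedged illustration of what "quantifying observed movements" could mean in practice, one simple baseline (deliberately not the learned model itself) scores a movement by its Mahalanobis distance from the distribution of expert movement features. The features and data below are synthetic stand-ins; real features might be summary statistics such as mean speed, jerk, or peak applied force.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in feature vectors per movement; real ones would be computed
# from the recorded kinematic and force signals.
expert = rng.normal(0.0, 1.0, size=(200, 5))
novice = rng.normal(0.0, 1.0, size=(50, 5)) + 3.0  # shifted distribution

mu = expert.mean(axis=0)
cov = np.cov(expert, rowvar=False) + 1e-6 * np.eye(5)  # regularised
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    """Distance of each row of x from the expert distribution."""
    d = x - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

# Movements far from the expert distribution receive high scores,
# flagging them for closer inspection of safety and effectiveness.
expert_scores = mahalanobis(expert)
novice_scores = mahalanobis(novice)
```

A learned latent-space model would replace these hand-crafted features, but the same scoring logic applies in the latent space.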

Resources required: 
Surgical training box, robots and environmental sensors
Project number: 
First Supervisor: 
Second Supervisor(s): 
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Robotics and machine learning at the level of our MSc programme, ROS and Python/C++ programming
Desirable skills and knowledge: 
Practical experience with robotics hardware and data collection

References: 

Lin, H. C., Shafran, I., Murphy, T. E., Okamura, A. M., Yuh, D. D., & Hager, G. D. (2005). Automatic detection and segmentation of robot-assisted surgical motions. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) (pp. 802-810). Springer, Berlin, Heidelberg.

Reiley, C. E., Lin, H. C., Yuh, D. D., & Hager, G. D. (2011). Review of methods for objective surgical skill evaluation. Surgical Endoscopy, 25(2), 356-366.

Hristov, Y., Lascarides, A., & Ramamoorthy, S. (2018). Interpretable latent spaces for learning from demonstration. In Conference on Robot Learning (CoRL).

Co-Reyes, J. D., Liu, Y., Gupta, A., Eysenbach, B., Abbeel, P., & Levine, S. (2018). Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings. arXiv preprint arXiv:1806.02813.