Learning cross-modal models of surgical skill
There is significant interest in characterising surgical skill in the form of detailed models that correlate hand movements and applied forces with the shape and texture of the tissues actually being manipulated.
In this project, we would like to explore methods for learning such models from data collected during realistic tasks performed with surgical training kits and other laboratory-based mock-ups. We expect these models to be based on neural network architectures, such as variational autoencoders, that learn latent-space descriptions of the underlying skills from rich combinations of input modalities.
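As a concrete illustration of what such an architecture might look like, the sketch below shows a variational autoencoder that embeds concatenated per-window features from the three modalities (kinematics, force, tissue appearance) into a shared latent space. This is a minimal sketch under stated assumptions, not a committed design: the PyTorch framing, all feature dimensions, and the MultimodalVAE and elbo_loss names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalVAE(nn.Module):
    """Sketch of a VAE embedding concatenated per-window features from hand
    kinematics, applied forces, and tissue appearance into one latent skill
    descriptor. All dimensions here are illustrative placeholders."""

    def __init__(self, kin_dim=24, force_dim=6, tissue_dim=32, latent_dim=8):
        super().__init__()
        in_dim = kin_dim + force_dim + tissue_dim
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)       # latent mean
        self.logvar = nn.Linear(64, latent_dim)   # latent log-variance
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, kin, force, tissue):
        x = torch.cat([kin, force, tissue], dim=-1)
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def elbo_loss(recon, kin, force, tissue, mu, logvar, beta=1.0):
    """Negative evidence lower bound: reconstruction error plus KL to N(0, I)."""
    x = torch.cat([kin, force, tissue], dim=-1)
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + beta * kl
```

A single encoder over concatenated features is only one design choice; modality-specific encoders fused in the latent space, or sequence models over whole trajectories, would be natural alternatives to investigate.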
The usefulness of these models will be evaluated both in terms of their ability to quantify the safety and effectiveness of observed movements and as constraints for synthesising similar robotic movements.
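One plausible way to approach the first kind of evaluation is to score each observed movement window by its negative ELBO under the learned model, treating poorly explained windows as candidate unsafe or ineffective motions. The sketch below assumes the illustrative MultimodalVAE above; the scoring rule and threshold are assumptions for illustration, not a validated skill metric.

```python
import torch

@torch.no_grad()
def skill_score(model, kin, force, tissue):
    """Per-window negative ELBO under the learned model; higher values flag
    movements the model considers unlike the demonstrated skill."""
    recon, mu, logvar = model(kin, force, tissue)
    x = torch.cat([kin, force, tissue], dim=-1)
    recon_err = ((recon - x) ** 2).sum(dim=-1)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    return recon_err + kl

# Example with random stand-in data: flag windows whose score is far above
# the batch mean (how to set this threshold is itself an open question).
model = MultimodalVAE()
kin, force, tissue = torch.randn(16, 24), torch.randn(16, 6), torch.randn(16, 32)
scores = skill_score(model, kin, force, tissue)
flagged = scores > scores.mean() + 2 * scores.std()
```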