Visual Teach and Repeat (VTR) is a powerful technique for autonomous robot navigation [1,2].
However, existing VTR methods are traditional, model-based approaches: they explicitly extract and store features of the environment without any learning capability. As a result, a robot cannot benefit from accumulating a large number of visual experiences or improve its understanding of the environment as it operates over time.
This project aims to develop a novel VTR paradigm that enables a quadrotor to fly autonomously and robustly along a previously taught path using a camera. It will leverage deep-learning-based techniques to learn continually from visual experience.
Paton M, MacTavish K A, Warren M, and Barfoot T D, "Bridging the Appearance Gap: Multi-Experience Localization for Long-Term Visual Teach and Repeat," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016.
I am currently a first-year PhD student supervised by Dr. Sen Wang at the Edinburgh Centre for Robotics. I hold a BEng in Robotics, Autonomous and Interactive Systems from Heriot-Watt University, graduating in 2017 with First Class Honours and a dissertation on biologically inspired swarm robotics.
Before joining the CDT, I interned at Sansible, a wearable sports-technology startup, focusing on prototype engineering. More recently, I interned at Robotical, an Edinburgh-based educational robotics startup, working on the continued development of Marty the Robot.