In recent years, much of the research focus in semantic segmentation has been on pursuing better performance, optimising metrics such as accuracy and computational speed.
However, there has been comparatively little focus on enabling new capabilities from these image segmentations and from semantic-level understanding, especially in the robotics domain.
This project investigates the effectiveness of learning a robot control policy from scene semantic information. Preliminary results include several new deep neural network models designed to learn an autonomous driving policy from scene semantic segmentation; the resulting policies are validated in a self-driving car simulator under various conditions. The project has also investigated generating top-down representations of a scene - "bird's eye views" - from the perspective of a road vehicle using Generative Adversarial Networks. More recently, work has begun on omnidirectional depth estimation for autonomous vehicles.
I am currently a third-year PhD student supervised by Dr. Sen Wang at the Edinburgh Centre for Robotics. I hold a First Class BEng (Hons) in Robotics, Autonomous and Interactive Systems, with a dissertation in biologically inspired swarm robotics, and an MScR with Distinction for a dissertation on semantic learning for self-driving cars. Both degrees were awarded by Heriot-Watt University. Prior to the CDT, I interned at Sansible, a wearable sports tech startup, focusing on prototype engineering. More recently, I interned at Robotical, an Edinburgh-based educational robotics startup, and at the Cambridge office of MathWorks within the Deep Learning team.