Vision-Based 24/7 Navigation in Large-Scale Outdoor Environments

The goal of this project is to achieve 24/7 autonomous navigation in large-scale outdoor environments using vision.
Description of the Project: 

Autonomous mobile robots are coming. Self-driving cars, for example, will change transportation significantly, offering better safety and efficiency with lower energy use. However, most existing autonomous navigation systems rely on LiDAR sensors, which are too expensive for many applications. In contrast, we humans use only our vision to navigate freely and drive safely in complex environments.

This project investigates how to use low-cost cameras for long-range robot autonomy, aiming to achieve 24/7 autonomous navigation in large-scale outdoor environments using vision. The main challenge is to leverage vision-based semantic scene understanding, mapping and localisation for robust obstacle avoidance, path planning and 24/7 place recognition. The student will study how to combine traditional probabilistic methods with machine/deep learning techniques for autonomous navigation. The developed algorithms and systems will be tested and evaluated over the long term in city-scale, dynamic outdoor environments.
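As one illustration of combining traditional probabilistic methods with learned components, the sketch below shows a discrete Bayes filter over place hypotheses for visual place recognition: a motion model predicts which map place the robot is likely at next, and an appearance likelihood (which in practice might come from a learned image descriptor's similarity to each stored place, assumed as input here) corrects that prediction. This is a minimal hypothetical example, not the project's prescribed method.

```python
import numpy as np

def bayes_place_filter(belief, transition, appearance_likelihood):
    """One update step of a discrete Bayes filter over N map places.

    belief: (N,) prior probability over places
    transition: (N, N) motion model, transition[i, j] = P(at j now | was at i)
    appearance_likelihood: (N,) P(current image | place), e.g. derived from a
        learned descriptor's similarity to each stored place (assumed given)
    """
    predicted = belief @ transition                 # prediction (motion) step
    posterior = predicted * appearance_likelihood   # measurement (appearance) update
    return posterior / posterior.sum()              # normalise to a distribution

# Toy example: 4 places along a loop; the robot usually advances one place.
T = np.array([
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.2, 0.8, 0.0],
    [0.0, 0.0, 0.2, 0.8],
    [0.8, 0.0, 0.0, 0.2],
])
belief = np.array([1.0, 0.0, 0.0, 0.0])      # start certain at place 0
likelihood = np.array([0.1, 0.7, 0.1, 0.1])  # current image best matches place 1
belief = bayes_place_filter(belief, T, likelihood)
```

The filter fuses where the robot expects to be (motion prior) with what it currently sees (appearance likelihood), which is what makes the combined estimate more robust than appearance matching alone under 24/7 visual change.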

Resources required: 
A mobile robot equipped with a stereo camera and on-board computer
Project number: 
First Supervisor: 
Heriot-Watt University
Second Supervisor(s): 
First supervisor university: 
Heriot-Watt University
Essential skills and knowledge: 
Programming skills (C/C++, Python), Linux
Desirable skills and knowledge: 
Experience with ROS and OpenCV; knowledge of sensor fusion, SLAM and/or deep learning
Funding Available: