Vision-Based 24/7 Navigation in Large-Scale Outdoor Environments
Autonomous mobile robots are coming. Self-driving cars, for example, will change transportation significantly, using less energy while improving safety and efficiency. However, most existing autonomous navigation systems rely on LiDAR sensors, which are too expensive for many applications. In contrast, humans use vision alone to navigate freely and drive safely through complex environments.
This project investigates how to use low-cost cameras for long-range robot autonomy, aiming to achieve 24/7 autonomous navigation in large-scale outdoor environments using vision. The main challenge is to leverage vision-based semantic scene understanding, mapping, and localisation for robust obstacle avoidance, path planning, and 24/7 place recognition. The student will study how to combine traditional probabilistic methods with machine/deep learning techniques for autonomous navigation. The developed algorithms and systems will be tested and evaluated over the long term in city-scale, dynamic outdoor environments.
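To illustrate the kind of combination the project envisions, here is a minimal sketch of a discrete Bayes (histogram) filter for place recognition along a mapped route: a classical probabilistic filter fuses odometry-style motion updates with appearance similarity scores, which in practice would come from a learned visual model. The route size, motion probability, and similarity values below are all illustrative assumptions, not part of the project description.

```python
def normalize(belief):
    s = sum(belief)
    return [b / s for b in belief]

def motion_update(belief, p_move=0.8):
    # Assumed motion model: the robot advances one place along the
    # route with probability p_move, otherwise it stays put.
    n = len(belief)
    return normalize([p_move * belief[(i - 1) % n] + (1 - p_move) * belief[i]
                      for i in range(n)])

def measurement_update(belief, similarity):
    # similarity[i] stands in for a learned appearance-match score
    # between the current camera view and the stored image of place i.
    return normalize([b * s for b, s in zip(belief, similarity)])

# Toy route of 10 mapped places; the robot starts fully uncertain.
n_places = 10
belief = [1.0 / n_places] * n_places
true_place = 0

for _ in range(5):
    true_place = (true_place + 1) % n_places
    belief = motion_update(belief)
    # Simulated scores: the true place matches strongly, others weakly.
    similarity = [0.9 if i == true_place else 0.1 for i in range(n_places)]
    belief = measurement_update(belief, similarity)

estimate = max(range(n_places), key=lambda i: belief[i])
print(estimate)  # index of the most likely place
```

In a real system the similarity vector would be produced by a deep place-recognition network robust to day/night and seasonal change, while the filter above supplies the temporal consistency that makes isolated mismatches recoverable.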