Vision-Based Mobile Autonomy through Object-Level Scene Understanding and Robust Visual SLAM
AnKobot is sponsoring an exciting PhD project in the field of mobile autonomy using visual Simultaneous Localization and Mapping (SLAM), semantic scene understanding, and computer vision.
The capabilities of mapping, localisation and navigation are fundamental for mobile robots to operate autonomously in our dynamic world. Traditional methods usually rely on laser scanners or LiDAR to observe the world, detect obstacles and then avoid them. However, the high cost of such sensors prevents their wide use in consumer electronics products. In contrast, low-cost consumer-grade cameras can also provide rich information about the environment that a mobile robot can exploit to achieve long-term autonomy. Vision brings extra challenges of its own, however, such as changes in visual appearance under different illumination. One potential solution is to incorporate scene understanding into mapping, localisation and navigation, enabling robots to move more intelligently and safely.
AnKobot has been developing visual SLAM solutions that combine a variety of visual cues, including lines, planes and deep-learning-based semantic features. These solutions perform robustly in a wide range of indoor environments, including those with low texture, very dim or very strong light, and rapidly changing lighting conditions.
In collaboration with AnKobot, this project will focus on vision-based mobile autonomy for long-term applications using object-level visual perception. The candidate will work on vision-based techniques to perform 3D scene reconstruction augmented with semantic information, enabling a robot to see, interpret and understand its surroundings before planning its motion for autonomous navigation. Robust visual SLAM incorporating multi-sensor data and semantic information is also among the research topics.
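To give a flavour of what "3D reconstruction augmented with semantic information" can involve, the toy sketch below fuses noisy per-frame class probabilities for a map point into a single label distribution, in the spirit of incremental Bayesian label fusion (as in SemanticFusion-style systems). This is a hypothetical illustration, not AnKobot's implementation; the `SemanticMap` class and its interface are invented for this example.

```python
import numpy as np

class SemanticMap:
    """Toy object-level semantic map: fuses per-frame class
    probabilities into a per-point label distribution.
    Hypothetical example, for illustration only."""

    def __init__(self, num_classes):
        self.num_classes = num_classes
        self.log_probs = {}  # map-point id -> accumulated log-probabilities

    def fuse(self, point_id, class_probs):
        """Bayesian label fusion: multiply per-frame likelihoods,
        done as a sum in log space for numerical stability."""
        lp = np.log(np.clip(class_probs, 1e-9, 1.0))
        if point_id in self.log_probs:
            self.log_probs[point_id] += lp
        else:
            self.log_probs[point_id] = lp

    def label(self, point_id):
        """Most likely class for a map point after fusion."""
        return int(np.argmax(self.log_probs[point_id]))

# Example: two noisy observations of the same map point
m = SemanticMap(num_classes=3)
m.fuse(0, [0.5, 0.3, 0.2])   # frame 1: weak evidence for class 0
m.fuse(0, [0.6, 0.3, 0.1])   # frame 2: again favours class 0
print(m.label(0))            # -> 0
```

In a full system, each map point would come from the SLAM reconstruction and the per-frame probabilities from a semantic segmentation network, so the fused labels accumulate evidence across viewpoints rather than trusting any single frame.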
- Yu Liu, Yvan Petillot, David Lane, and Sen Wang. Global localization with object-level semantics and topology. In IEEE International Conference on Robotics and Automation (ICRA), 2019.
- John McCormac, Ankur Handa, Andrew Davison, and Stefan Leutenegger. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks. In IEEE International Conference on Robotics and Automation (ICRA), 2017.