Vision-Based Mobile Autonomy through Object-Level Scene Understanding and Robust Visual SLAM

The goal of this project is to develop vision-based algorithms for long-term mobile autonomy in dynamic environments, leveraging object-level scene understanding, multi-sensor fusion and visual SLAM.
Description of the Project: 

AnKobot is sponsoring an exciting PhD project in the field of mobile autonomy using visual Simultaneous Localization and Mapping (SLAM), semantic scene understanding and computer vision.

The capabilities of mapping, localisation and navigation are fundamental for mobile robots to operate autonomously in our dynamic world. Traditional methods usually rely on laser scanners or LiDAR to observe the world, detect obstacles and then avoid them, but the high cost of such sensors prevents their wide use in consumer electronics products. In contrast, low-cost consumer-grade cameras can also provide rich information about the environment that a mobile robot can exploit to achieve long-term autonomy. However, cameras bring extra challenges, e.g., changes in visual appearance under different illumination. One potential solution is to incorporate scene understanding into mapping, localisation and navigation, enabling robots to move in a smarter and safer way.
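
One simple illustration of this idea, offered here only as a minimal sketch and not as the project's prescribed method, is to use a semantic segmentation mask to discard visual features that fall on dynamic objects (e.g., people or pets), so that mapping relies only on static structure. The class IDs and array shapes below are hypothetical.

```python
import numpy as np

DYNAMIC_CLASSES = {15, 17}  # hypothetical label IDs for "person" and "pet"

def filter_dynamic_keypoints(keypoints_uv: np.ndarray, seg_mask: np.ndarray) -> np.ndarray:
    """Keep only keypoints whose pixel lands on a static semantic class.

    keypoints_uv: (N, 2) array of integer pixel coordinates (u, v).
    seg_mask:     (H, W) array of per-pixel class IDs from a segmentation network.
    """
    u, v = keypoints_uv[:, 0], keypoints_uv[:, 1]
    labels = seg_mask[v, u]                       # class ID under each keypoint
    static = ~np.isin(labels, list(DYNAMIC_CLASSES))
    return keypoints_uv[static]

# Example: three keypoints on a 480x640 mask; the one on a "person" pixel is dropped.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 300:400] = 15
kps = np.array([[320, 150], [50, 50], [600, 400]])
print(filter_dynamic_keypoints(kps, mask))        # -> [[ 50  50] [600 400]]
```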

AnKobot has been developing visual SLAM solutions that combine a variety of visual cues, including lines, planes and deep-learning-based semantic features. These solutions give robust performance in a wide range of indoor environments, including those with low texture, very low or very strong light, and rapidly changing lighting conditions.
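
As a rough, illustrative sketch of two of these cues (and not AnKobot's implementation), the snippet below extracts ORB point features and LSD line segments with OpenCV; line segments are often more reliable than corner points in low-texture indoor scenes. Note that `cv2.createLineSegmentDetector` is unavailable in some OpenCV 4.x builds for licensing reasons.

```python
import cv2

def extract_point_and_line_features(image_path: str):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # ORB point features with binary descriptors
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    # LSD line segments (walls, furniture edges) for low-texture scenes
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]

    return keypoints, descriptors, lines
```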

In collaboration with AnKobot, this project will focus on vision-based mobile autonomy for long-term applications using object-level visual perception. The candidate will work on vision-based techniques that perform 3D scene reconstruction augmented with semantic information, enabling a robot to see, interpret and understand its surroundings before planning its motion for autonomous navigation. Robust visual SLAM incorporating multi-sensor data and semantic information is another research area.
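
To make the idea of semantically augmented reconstruction concrete, the following minimal sketch (assuming a pinhole camera model and a depth image already aligned with a per-pixel segmentation, with made-up intrinsics) back-projects an RGB-D frame into a labelled 3D point cloud in the camera frame. Real systems would go further, e.g., fusing labels over many frames into a map.

```python
import numpy as np

def semantic_point_cloud(depth_m: np.ndarray, labels: np.ndarray,
                         fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """depth_m: (H, W) depth in metres; labels: (H, W) per-pixel class IDs.
    Returns an (N, 4) array of [X, Y, Z, class_id] for pixels with valid depth."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx          # pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0                  # skip pixels with no depth measurement
    return np.stack([x[valid], y[valid], z[valid],
                     labels[valid].astype(np.float64)], axis=1)

# Example with a tiny synthetic 2x2 frame and hypothetical intrinsics.
depth = np.array([[1.0, 0.0], [2.0, 1.5]])
seg = np.array([[3, 0], [3, 7]])
print(semantic_point_cloud(depth, seg, fx=500.0, fy=500.0, cx=0.5, cy=0.5))
```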

Project number: 
600005
First Supervisor: 
University: 
Heriot-Watt University
First supervisor university: 
Heriot-Watt University
Essential skills and knowledge: 
Good programming skills (C++, Python, Linux) and familiarity with computer vision, machine learning and/or probabilistic robotics.
Desirable skills and knowledge: 
Experience with ROS, OpenCV and TensorFlow/PyTorch; knowledge of sensor fusion, SLAM and/or deep learning.
References: 

- Yu Liu, Yvan Petillot, David Lane, and Sen Wang. Global localization with object-level semantics and topology. In IEEE International Conference on Robotics and Automation (ICRA), 2019.

- John McCormac et al. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks. In IEEE International Conference on Robotics and Automation (ICRA), 2017.

Industry placement details: 
AnKobot is headquartered in Shanghai, China, with offices in Shenzhen, Silicon Valley, Singapore and Israel. The successful candidate will have the opportunity to collaborate with AnKobot's R&D team on system integration and deployment, seeing their research widely used in one of the most exciting consumer-grade robot products in the world.

AnKobot is a young, dynamic start-up focusing on AI and robotics. It was one of the first technology companies to bring visual SLAM to the consumer-grade robot market, achieving commercial large-scale production of the technology in 2017. AnKobot's visual SLAM module has since been used in many types of robotic vacuum cleaner, and one of its products, the Amarey robot vacuum cleaner, has been an Amazon's Choice for robotic vacuums. AnKobot has received several rounds of funding from prestigious international investors and publicly traded companies, including Fosun Group and Hi-P International.

AnKobot develops innovative technology and products for home AI, robotics, smart home, IoT and cloud services, deploying deep learning, visual SLAM, embedded AI, computer vision and other cutting-edge technologies. It creates new sensors to build a "Robot Vision System" and "Robot Brain" that enable devices to sense, think and act using consumer-electronics parts. The long-term technological goal is to create a complete visual system, powered by AI and made up of hardware and software, simulating the functions of the human visual system. Current R&D focuses on developing home robots that can learn continuously from their surrounding environments using small amounts of data, based on computational principles of the human brain.