Autonomous robots for last-mile delivery

The mobile robot should be able to autonomously deliver parcels, groceries, and food to the customer's door in a mock-up residential environment, and to achieve tool use, handling of various objects, and collision-free navigation through elevators and doors.
Description of the Project: 

Background

"Last mile" delivery to the customer's door is a required competency in today's consumer marketplace. Last-mile fulfillment is essential to the customer experience, and large corporations are therefore competing to provide local, rapid delivery services to meet that expectation. Major enterprises such as Amazon, Walmart, Alibaba, UPS, and FedEx are all deploying their own strategic plans. Without doubt, all established service providers are competing to gain market share in this rapidly growing, $2 trillion e-commerce market [1].

Project description

This multi-disciplinary project will cover vision-based navigation, robot grasping, visual servoing, and related topics. The application domain of a daily living environment requires particular consideration of human factors in the navigation element, such as predicting human intentions and behaviors and avoiding collisions with people in the scene, in order to guarantee safe interaction in human-populated environments. In addition, the core task of handling various objects, e.g. parcels and groceries, demands soft and compliant robot hardware as well as careful grasping control during object handling. All of this poses research challenges in safe physical interaction among the robot, people, targeted objects, and the environment.

Apart from these service-oriented capabilities required in the research development, a major part of the project is to develop novel generic, robust, and scene-agnostic visual servoing using convolutional neural networks (CNNs), which is fundamental to precise picking and placing of objects and to operating instruments and facilities (elevators, doors, storage units). This will lay an essential foundation for enabling a broader range of physical interactions wherever vision-based positioning is required. Initial research will start from a pre-trained CNN used for feature extraction, retraining the last layer as a new regression layer so that the network's final output can be used to produce policies for reaching and positioning. The next stage will create synthetic training datasets for fine-tuning the network, so that it is robust to perturbations in illumination, image errors, and occlusions.
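As a rough illustration of this approach, the sketch below (in PyTorch, one of the desirable tools listed for this project) freezes a pre-trained backbone, replaces its final layer with a regression head, and defines an augmentation pipeline of the kind that could emulate illumination changes, image errors, and occlusions. The backbone choice, the 6-DoF output dimensionality, and the transform parameters are illustrative assumptions, not project specifications.

```python
# Sketch only: a pre-trained CNN with its last layer retrained as a
# regression head for visual servoing, plus synthetic perturbations.
# Backbone, output size (here a 6-DoF pose offset), and transform
# parameters are illustrative assumptions, not project specifications.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T

# Pre-trained backbone used as a fixed feature extractor.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in net.parameters():
    p.requires_grad = False  # freeze the pre-trained features

# Replace the classification layer with a regression layer whose output
# (e.g. a 6-DoF end-effector offset) can drive reaching and positioning.
net.fc = nn.Linear(net.fc.in_features, 6)

# Initially, only the new regression layer is trained.
optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Synthetic perturbations for the fine-tuning stage: illumination
# (ColorJitter), image errors (GaussianBlur), occlusions (RandomErasing).
# Applied to camera frames (PIL images) when building the training set.
augment = T.Compose([
    T.ColorJitter(brightness=0.5, contrast=0.5),
    T.GaussianBlur(kernel_size=5),
    T.ToTensor(),
    T.RandomErasing(p=0.5),
])

# One illustrative training step on placeholder data.
images = torch.randn(8, 3, 224, 224)   # placeholder camera images
targets = torch.randn(8, 6)            # placeholder pose offsets
loss = loss_fn(net(images), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```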

In addition, substantial research effort will be devoted to the robustness and reliability of these algorithms, so that they survive real-world testing in the presence of imperfect image/camera calibration, varying outdoor/indoor lighting conditions, and various unseen scenes with noisy measurements. An extra safety layer will also be developed to ensure safe operation and to allow human intervention when necessary.
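One plausible shape for such a safety layer, given the project's use of ROS, is a node that gates velocity commands on proximity readings and on an operator override. The sketch below is a minimal illustration only; the topic names, stop distance, and override flag are hypothetical assumptions, not part of the project specification.

```python
#!/usr/bin/env python
# Sketch of a safety gate between the planner and the robot base.
# Topic names, the stop distance, and the override flag are hypothetical.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan
from std_msgs.msg import Bool

STOP_DISTANCE = 0.5  # metres; assumed threshold for halting the base

class SafetyGate(object):
    def __init__(self):
        self.too_close = False
        self.override = False  # set True by a human operator to halt
        self.pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('cmd_vel_raw', Twist, self.on_cmd)
        rospy.Subscriber('scan', LaserScan, self.on_scan)
        rospy.Subscriber('operator_stop', Bool, self.on_override)

    def on_scan(self, scan):
        # Halt if any valid laser return falls inside the stop distance.
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        self.too_close = bool(valid) and min(valid) < STOP_DISTANCE

    def on_override(self, msg):
        self.override = msg.data

    def on_cmd(self, cmd):
        # Forward the planner's command, or a zero Twist when unsafe.
        self.pub.publish(Twist() if (self.too_close or self.override) else cmd)

if __name__ == '__main__':
    rospy.init_node('safety_gate')
    SafetyGate()
    rospy.spin()
```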

Resources required: 
Husky robots at the Edinburgh Centre for Robotics; high-end PCs for control/planning, SLAM, and deep learning.
Project number: 
124023
First Supervisor: 
University: 
University of Edinburgh
Second Supervisor(s): 
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Linux, C++, Python, knowledge of SLAM and machine learning, TensorFlow, and experience with ROS.
Desirable skills and knowledge: 
PyTorch, experience with physics engines (ODE, Bullet, MuJoCo, PhysX 3.4) and physics simulators (Gazebo, PyBullet, V-REP, Unity), experience with ROS and real robots.
References: 

[1] "The Ecommerce Fight for Last Mile Freight Delivery", Supply Chain 24/7.