Unsupervised Learning of Objects in Motion

The goal of this project is to develop systems that learn to segment objects from videos with little or no supervision.
Description of the Project: 

Objects play a central role in the behaviour of intelligent systems like robots. This makes semantic object segmentation a fundamental component of many applications. State-of-the-art object segmentation is typically achieved by training expensive networks on large sets of labelled images. This makes new knowledge expensive to acquire, since the network needs many labelled images of each new object. In addition, these networks take only single images as input, ignoring all the useful information in the motion of objects.

In this project our goal is to make object segmentation work on video and on new objects. By leveraging motion in the segmentation problem we gain additional constraints that can help alleviate the need for labelled data. The motion may come from passive videos (where the camera cannot move), from simulated environments (like Gibson-env), or even from real robot interaction.
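As a minimal illustration of how motion can act as a supervisory constraint, pixels that change between consecutive frames can be thresholded into a binary pseudo-mask, which could then supervise a segmentation network. The sketch below uses simple frame differencing as a stand-in for a proper optical-flow estimate (e.g. Farneback or a learned flow network); the function name and threshold are illustrative, not part of the project specification.

```python
import numpy as np

def motion_pseudo_mask(frame_t, frame_t1, threshold=0.1):
    """Derive a binary foreground mask from apparent motion between two
    consecutive grayscale frames (values in [0, 1]).

    Frame differencing is a crude stand-in for optical flow: it flags
    pixels whose intensity changed more than `threshold`.
    """
    motion = np.abs(frame_t1.astype(np.float32) - frame_t.astype(np.float32))
    return motion > threshold

# Toy example: a bright 3x3 "object" shifts one pixel to the right.
f0 = np.zeros((8, 8), dtype=np.float32)
f0[2:5, 2:5] = 1.0
f1 = np.zeros((8, 8), dtype=np.float32)
f1[2:5, 3:6] = 1.0

mask = motion_pseudo_mask(f0, f1)
print(int(mask.sum()))  # pixels flagged as moving (trailing and leading edge)
```

Only the leading and trailing edges of the object register as motion here; a real optical-flow field would recover the full moving region, which is why flow estimation appears among the desirable skills below.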

The underlying learning needs to incorporate few-shot learning, to avoid the current need for extensive labelled data. This means that our system needs to create representations of objects general enough that new concepts can be easily learned. Another necessary component will be reinforcement learning, since disambiguating object category and object geometry often requires ego movement and basic interaction.
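One common formulation of few-shot learning over general object representations is the prototypical-network style of classification: each new class is represented by the mean of a handful of support embeddings, and a query embedding is assigned to the nearest prototype. The sketch below is a hypothetical illustration with hand-made 2-D embeddings; in practice the embeddings would come from a learned deep network.

```python
import numpy as np

def nearest_prototype(support, labels, query):
    """Classify `query` by the nearest class prototype, where each
    prototype is the mean of that class's support embeddings
    (prototypical-network style few-shot classification)."""
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(protos - query, axis=1)
    return int(classes[np.argmin(dists)])

# Toy 2-D embeddings: two novel classes, two support examples each.
support = np.array([[0.0, 0.0], [0.2, 0.1],   # class 0
                    [1.0, 1.0], [0.9, 1.1]])  # class 1
labels = np.array([0, 0, 1, 1])

print(nearest_prototype(support, labels, np.array([0.1, 0.0])))  # → 0
print(nearest_prototype(support, labels, np.array([1.0, 0.9])))  # → 1
```

The appeal for this project is that adding a new object class requires only a few embedded examples, not retraining on a large labelled dataset.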

Resources required: 
GPU equipment for running experiments and simulations. Any robot of the student’s choice that has an arm.
Project number: 
First Supervisor: 
University of Edinburgh
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Experience with and understanding of basic deep learning concepts. Experience with TensorFlow or PyTorch.
Desirable skills and knowledge: 
Experience with few-shot learning and reinforcement learning. Knowledge of object segmentation and optical flow estimation.
Funding Available: