Incremental Learning for Robotic Lifelong Learning

To develop incremental learning techniques that support lifelong learning in robotics.
Description of the Project: 

When robots are co-located with people, safe interaction requires that the robot has reliable, up-to-date information about its environment and the locations of all persons and objects within it. The safety of any action undertaken by the robot needs to be assessed against the current state of its world, so the robot’s model of the world must be regularly updated.

Robots deployed for service over extended periods need to be able to adapt to environmental changes and learn new behaviours over the course of their service lifetime. Key to lifelong learning in robotics is the ability to accumulate new knowledge over time. The vast majority of machine learning techniques require complete retraining of the system whenever new information must be incorporated, which is both inefficient and often practically unacceptable.
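As a purely illustrative sketch (not part of the project specification), the contrast can be seen with scikit-learn’s SGDClassifier, whose partial_fit method folds each new batch of data into an existing model, whereas a conventional batch learner would have to store every past example and refit on the full history whenever new information arrives. The simulated data stream below is hypothetical.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])        # all class labels must be declared up front
model = SGDClassifier()

# Simulated stream: each batch stands in for new experience gathered in service.
for _ in range(10):
    X_batch = rng.normal(size=(32, 4))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    # Incremental update: only the new batch is needed, no stored history.
    model.partial_fit(X_batch, y_batch, classes=classes)
    # A batch learner would instead retain all past data and call fit() on
    # the accumulated history every time new information arrives.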

Human learning clearly does not occur in this all-or-nothing manner, and a number of approaches have been developed to facilitate incremental machine learning, with varying degrees of success. The most promising require repeated growth and pruning of the learnt knowledge base, and this PhD research will need to identify suitable parameters to constrain these processes, which are likely to depend on the robot’s environment and tasks.
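One hypothetical, deliberately simplified sketch of a single grow-and-prune cycle is given below, with the knowledge base reduced to scalar exemplars carrying utility weights; the novelty threshold and size budget stand in for exactly the kind of environment- and task-dependent parameters the research would need to identify.

def grow_and_prune(knowledge, observations, novelty=0.5, max_size=100):
    for obs in observations:
        matches = [k for k in knowledge if abs(obs - k) <= novelty]
        if matches:
            for k in matches:            # reinforce items the observation supports
                knowledge[k] += 1.0
        else:
            knowledge[obs] = 1.0         # growth: admit a sufficiently novel item
    # Pruning: retain only the highest-utility items within the size budget.
    kept = sorted(knowledge, key=knowledge.get, reverse=True)[:max_size]
    return {k: knowledge[k] for k in kept}

kb = {}
for batch in ([0.1, 0.9, 1.5], [0.15, 2.2], [0.12, 0.95]):
    kb = grow_and_prune(kb, batch)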

Furthermore, resolution mechanisms are required when new knowledge conflicts with previously acquired knowledge. Jean Piaget [1] identified two mutually exclusive processes at work in human learning: assimilation and accommodation. The former gives precedence to the old knowledge, the latter to the new. Both are needed, but the decision on which to deploy, and when, is context-dependent and requires careful analysis.
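By way of illustration only, the sketch below caricatures this choice for a single scalar belief: assimilation folds the new evidence into the established value, while accommodation lets the new evidence replace it. The rule used to choose between them (a simple comparison of evidence counts) is a placeholder for the context-dependent analysis the project calls for.

def resolve(old_value, old_support, new_value, new_support):
    if old_support >= new_support:
        # Assimilation: old knowledge takes precedence; the new evidence only
        # nudges the stored value towards the observation.
        weight = new_support / (old_support + new_support)
        return old_value + weight * (new_value - old_value), old_support + new_support
    # Accommodation: new knowledge takes precedence and replaces the old.
    return new_value, new_support

belief, support = 1.0, 5                              # a previously learnt estimate
belief, support = resolve(belief, support, 1.4, 2)    # weak evidence: assimilate
belief, support = resolve(belief, support, 3.0, 20)   # strong evidence: accommodate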

Project number: 
240010
First Supervisor: 
University: 
Heriot-Watt University
First supervisor university: 
Heriot-Watt University
Essential skills and knowledge: 
Machine learning
Desirable skills and knowledge: 
Programming and ROS
References: 

[1] J Piaget & B Inhelder, 1969, ‘The Psychology of the Child’. Routledge & Kegan Paul: London.

[2] M Andrecki & N K Taylor, 2018, ‘Learning to Track Environment State’. Joint Industry and Robotics CDTs Symposium: JIRCS 2018. Bristol, UK.

[3] T López Guevara, N K Taylor, M U Gutmann, S Ramamoorthy & K Subr, 2017, ‘Adaptable Pouring: Teaching Robots Not to Spill using Fast but Approximate Fluid Simulation’. 1st Annual Conference on Robot Learning: CoRL 2017. Mountain View, USA.

[4] S M Gallacher, E Papadopoulou, N K Taylor & M H Williams, 2013, ‘Learning User Preferences for Adaptive Pervasive Environments: An Incremental and Temporal approach’. ACM Transactions on Autonomous and Adaptive Systems, 8 (1), pp.5:1-5:26.

[5] S M Gallacher, 2011, ‘Learning Preferences for Personalisation in a Pervasive Environment’. PhD Thesis, Heriot-Watt University.