The dichotomy of emotion versus reason has been part of Western thinking since at least the time of the Ancient Greek philosopher Plato. Our scientific understanding of how emotion works has moved past this dichotomy to a view of emotion as strongly integrated with, and central to, cognitive processing. Emotions are now seen as an adaptation of the survival circuits in our brain to handle more complex tasks.
So far, research in artificial cognitive systems has barely begun to include emotions, and the systems that do tend to use them in a non-central way. Some have recognised the bodily substrate of emotions and their role in learning, but in these systems emotions still have a limited influence on other aspects of cognition, or else they are purely emotion-based, with emotion being the only mechanism driving behaviour.
This research aims to sketch a path towards an artificial cognitive system with emotion as a central component. It works towards an emotion system grounded in the body, with the full range of learning capabilities known to be present in humans (e.g. associative learning that pairs a neutral stimulus with emotional responses), and with the full range of inputs to and outputs from other cognitive functions. It also aims to explore some of the interfaces between the core emotion system and cognitive processes such as attention, memory, and reasoning.
(MSc) Towards a Mutual Gaze Controller for iCub: Markov Modelling of Human Gaze. Supervised by Frank Broz
Why is gaze so important? Why do we want robots to have more human-like gaze? And how can we achieve this?
Psychology has discovered that gaze plays an important part in human social interaction, an importance that extends to all primates. When robots take on human shape and roles, the expectation that they will follow social conventions increases. We present an overview of the literature on efforts to make the behaviour of robots, especially their gaze behaviour, more human-like during social interactions, mostly one-on-one.
This project uses data from face-to-face conversations gathered previously (Broz et al., 2012) and the probabilistic tool of Markov chains to design a gaze controller that we hope exhibits more human-like gaze behaviour. We lay out the methods and techniques we use to go from data to controller design and a possible future implementation on the iCub robot: clustering the gaze data and labelling the clusters, counting state transitions to estimate the controller parameters, and creating time histograms from the data.
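The transition-counting step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the state labels and the toy gaze sequence are hypothetical stand-ins for the labelled clusters obtained from the recorded data.

```python
from collections import Counter

# Hypothetical gaze states standing in for the labelled clusters;
# the real labels come from clustering the recorded gaze data.
STATES = ["eyes", "mouth", "elsewhere_on_face", "away"]

def transition_matrix(sequence, states=STATES):
    """Estimate first-order Markov transition probabilities by
    counting consecutive state pairs in a labelled gaze sequence."""
    counts = Counter(zip(sequence, sequence[1:]))
    matrix = {}
    for s in states:
        total = sum(counts[(s, t)] for t in states)
        # Uniform fallback for states never observed as a source.
        matrix[s] = {t: (counts[(s, t)] / total) if total else 1 / len(states)
                     for t in states}
    return matrix

# Toy labelled sequence (one label per sampled video frame).
seq = ["eyes", "eyes", "mouth", "eyes", "away", "away", "eyes", "mouth"]
P = transition_matrix(seq)
```

Each row of the resulting matrix is a probability distribution over the next gaze target given the current one, which is exactly the parameterisation a Markov-chain gaze controller needs to generate behaviour.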
Our results lie mainly in furthering the understanding of the data and the insights drawn from it. We found considerable variability between people. We found quite variable and perceptible offsets in the gaze data relative to the image-extracted features of the original recordings. We confirmed our expectation that, when looking at the partner's face, the focus is mostly on the eyes and mouth. We found that, against some of our initial expectations, the time histograms are very close to the exponential drop-off expected of a Markov process. We found support for the hypothesis that people look more at the mouth when the partner is talking, and that the extra attention comes mostly from time otherwise spent looking away from the face. Based on the analysis of the controller transition counts, we put forward the hypothesis that the mouth may be used to predict when a person needs to yield their speaking turn.