Teun Krikke

Research project title: 
Deep Learning of Human Activity in Audio and Video streams
Principal goal for project: 
To develop algorithms that exploit recent advances in convolutional neural network learning to recognise patterns of behaviour (e.g. emotion) in people from spoken words and images.
Research project: 

The project will have two main avenues of research:

  1. To demonstrate robust feature extraction from audio and video, independently, in order to classify emotional states and to recognise individuals in somewhat unconstrained environments. (This means noisy audio data and partially occluded faces in the image data.)
  2. To develop methods that exploit the common basis between these data sources, i.e. that both signals are examples of a single emotional state from a single, unique individual.

The project will benefit from standardised datasets associated with open recognition competitions, and from the use of open-source libraries for the basic “deep learning” in (1).
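The two avenues above could be combined in a single architecture: one CNN branch extracting features from the audio signal, another from the video signal, with the two embeddings fused for a shared emotion classifier. The sketch below illustrates this idea in PyTorch; the layer sizes, 64×64 inputs, and four-class emotion set are illustrative assumptions, not the project's actual design.

```python
# Hypothetical two-stream sketch: small CNNs extract per-modality features
# (avenue 1), which are then fused on the assumption that both signals
# describe one emotional state of one individual (avenue 2).
# All shapes and class counts are placeholder choices for illustration.
import torch
import torch.nn as nn


def _branch(in_channels: int, embed_dim: int) -> nn.Sequential:
    """A tiny CNN feature extractor producing a fixed-size embedding."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, embed_dim),
    )


class TwoStreamEmotionNet(nn.Module):
    def __init__(self, n_emotions: int = 4, embed_dim: int = 64):
        super().__init__()
        # Audio branch: operates on a 1x64x64 log-mel spectrogram.
        self.audio = _branch(1, embed_dim)
        # Video branch: operates on a 3x64x64 face crop from a frame.
        self.video = _branch(3, embed_dim)
        # Late fusion: concatenate the embeddings, classify jointly.
        self.classifier = nn.Linear(2 * embed_dim, n_emotions)

    def forward(self, spectrogram: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        a = self.audio(spectrogram)
        v = self.video(frame)
        return self.classifier(torch.cat([a, v], dim=1))


net = TwoStreamEmotionNet()
logits = net(torch.randn(8, 1, 64, 64), torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 4])
```

Late fusion is only one option; the second avenue could equally be pursued with a shared embedding space or cross-modal training objectives.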

About me: 

Research interests: Signal processing, vision, machine learning, human-robot interaction.