Sabrina McCallum

Research project title: 
Learning to Ground Embodied Interactions in Multi-Modal Feedback
Principal goal for project: 
To enable artificial agents to leverage visual, audio and language signals to robustly complete tasks in unseen embodied environments
Research project: 

Learning to complete complex or open-ended tasks when rewards are sparse or there is no clear success criterion remains a challenge for embodied agents. Manually crafting or learning robust, generalisable reward functions is non-trivial and requires domain knowledge, typically resulting in task- and environment-specific solutions. This is especially problematic when the environment is dynamic, non-deterministic, or includes multiple agents. This project explores alternative approaches that instead leverage agents' ability to understand the outcomes of their actions from raw, information-rich signals, such as language feedback, visual observations, and environment sounds, and to adjust their behaviour accordingly. For a list of publications, please refer to my Google Scholar profile.

About me: 

Before joining the CDT RAS, I completed an MSc in Artificial Intelligence at the University of Strathclyde, Glasgow, where I graduated with Distinction and at the top of my class. My MSc dissertation combined my interests in multi-modality, representation learning, and affective computing. I previously worked as a Data Analyst and was involved in a range of Data Science projects for leading brands.