Sabrina McCallum

Research project title: 
Learning Grounded Representations from Multi-Modal Feedback and Interactions with Embodied Environments
Principal goal for project: 
To enable artificial agents to learn from visual, audio, and language signals and to generalize across different embodied environments
Research project: 

Learning complex, hierarchical tasks, or diverse, open-ended tasks, remains a challenge for reinforcement learning (RL) agents when rewards are sparse or there is no clear success criterion. Manually crafting dense shaping rewards is non-trivial, and potentially infeasible in some environments, and choosing a good heuristic requires domain knowledge, typically resulting in task- and environment-specific solutions. This project explores alternative approaches that instead leverage information-rich language feedback and other multi-modal signals arising directly from an embodied agent's interactions with its environment.
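
As a concrete illustration (not the project's actual method), the sketch below shows one simple way language feedback could stand in for a hand-crafted shaping term: a dense bonus derived from the similarity between a goal instruction and textual feedback describing the agent's latest behaviour. The encoder choice, the goal and feedback strings, and the weighting are all illustrative assumptions.

```python
# Minimal sketch: deriving a dense reward from language feedback instead of
# a hand-crafted shaping heuristic. The model, strings, and weight below are
# illustrative assumptions, not the project's method.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # off-the-shelf text encoder

def language_shaped_reward(goal: str, feedback: str, sparse_reward: float,
                           weight: float = 0.1) -> float:
    """Combine the environment's sparse reward with a dense bonus based on
    how well textual feedback about the agent's behaviour matches the goal."""
    goal_emb, fb_emb = model.encode([goal, feedback], convert_to_tensor=True)
    similarity = util.cos_sim(goal_emb, fb_emb).item()  # in [-1, 1]
    return sparse_reward + weight * similarity

# Feedback that describes progress toward the goal yields a higher shaped
# reward than irrelevant feedback, with no task-specific heuristic.
goal = "pick up the red key and open the door"
r_good = language_shaped_reward(goal, "you picked up the red key", 0.0)
r_bad = language_shaped_reward(goal, "you bumped into a wall", 0.0)
print(r_good > r_bad)  # expected: True
```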

About me: 

Before joining the CDT RAS, I completed an MSc in Artificial Intelligence at the University of Strathclyde, Glasgow, graduating with Distinction at the top of my class. My MSc dissertation combined my interests in multi-modality, representation learning, and affective computing. Before that, I worked as a Data Analyst on a range of data science projects for leading brands.