Ioannis Chalkiadakis

Research project title: 
Building trust in Recurrent Neural Networks through data-driven, human-interpretable visualizations.
Principal goal for project: 
Provide a data-driven, interactive visualization framework for LSTM networks performing sentiment classification, and evaluate it with a user-based experiment.
Research project: 

Given the success of neural networks in recent years, and especially of deep architectures, their use has been expanding into ever more critical application areas such as security, autonomous driving, and healthcare. In contrast to earlier, well-documented and thoroughly tested approaches, we still have little understanding of what such models learn and when they might fail. The question that naturally arises is whether we can trust such systems to undertake safety-critical tasks. Furthermore, in light of recent European Union legislation (the 2016 General Data Protection Regulation, Art. 22), which essentially requires accountable models, companies employing such technologies should be able to explain them in an understandable way to non-expert customers.

The current project (1st-year MSc project) focuses on a sentiment classification task and aims to provide a framework for a data-driven interpretation of the operation of a Long Short-Term Memory (LSTM) Recurrent Neural Network. We believe that, given the difficulty of defining and measuring the interpretability of neural network models, its evaluation should initially focus on users, and only later on a rigorous evaluation metric. We therefore provide a critical evaluation of the framework based on our own experience and a pilot study, and set out guidelines for a complete user-based evaluation at a future stage.
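Because the visualization is data-driven, its raw material is the network's per-token activations. The minimal sketch below, assuming PyTorch and entirely hypothetical names (SentimentLSTM, vocab_size, hidden_dim), shows one way the per-timestep LSTM hidden states could be exposed alongside the sentiment prediction so that they can be passed to an interactive visualization; it illustrates the kind of data involved, not the project's actual implementation.

# Illustrative sketch only: a toy LSTM sentiment classifier that returns its
# per-token hidden states so they can be visualized. Assumes PyTorch; all
# names and dimensions are hypothetical, not the project's real code.
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # positive / negative

    def forward(self, token_ids: torch.Tensor):
        embedded = self.embedding(token_ids)               # (batch, seq, embed_dim)
        hidden_states, _ = self.lstm(embedded)             # (batch, seq, hidden_dim)
        logits = self.classifier(hidden_states[:, -1, :])  # classify from final state
        # Return the per-timestep hidden states alongside the prediction so a
        # visualization can show how the sentiment estimate evolves token by token.
        return logits, hidden_states

if __name__ == "__main__":
    model = SentimentLSTM(vocab_size=10_000)
    tokens = torch.randint(0, 10_000, (1, 12))  # one 12-token sentence
    with torch.no_grad():
        logits, states = model(tokens)
    print(logits.shape, states.shape)  # torch.Size([1, 2]) torch.Size([1, 12, 128])

In such a setup, the returned hidden-state tensor (one vector per input token) is what an interactive front end would render, for example as activation heatmaps over the sentence.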

Research Roadmap

Starting from this project, the research focus will continue to be on the interpretability of machine learning (ML) models in robotics. Research into rigorous mathematical explanations of a particular type of ML model (deep neural networks) is advancing fast; however, their application in safety-critical areas (healthcare, autonomous vehicles, privacy-demanding systems) calls for an understanding of these models not only by experts, but also by users without prior knowledge or expertise in the area.

How can we provide an understanding or interpretation that allows users to trust the ML models they indirectly use, and allows the models' designers to improve them? This is a crucial question that needs to be addressed before we can expect people to trust, and hence actually use, robotic systems. We hope that our work will make a significant contribution towards answering it.


Student type: 
Current student