Robust and Explainable Machine Learning for NLP Applications
Deep Neural Network (DNN) technologies, coupled with GPU-class hardware, provide practical methods for learning complex language models from vast datasets (as typified by BERT). However, their architectures are often developed by trial and error, and the resulting systems normally provide 'black box' solutions containing many millions of learnt but abstract parameters. They are therefore extremely difficult to interpret and understand, and the accuracy and certainty of their predictions cannot normally be derived mathematically.
Consequently, DNNs and RNNs are often not used for high-impact decision support, particularly in regulated environments, as management is rarely provided with sufficient, transparent evidence to engender confidence, allow assessment of risk, or guarantee outcomes.
In contrast, Gaussian Processes (GPs) can be designed using highly principled methodologies, in which human knowledge and assumptions are explicitly recorded and exploited to provide parsimonious machine learning solutions. They are parsimonious in that they contain several orders of magnitude fewer parameters than DNN solutions, and these parameters often map directly to the input data, allowing explanations of the GP's operation to be generated. In addition, the uncertainty of results (e.g. the 95% confidence intervals on a GP prediction) is available as a direct consequence of how GPs operate. Recently we have extended these approaches to exploit time-series analysis for language modelling.
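To illustrate how a GP yields both a prediction and its uncertainty, the following is a minimal sketch of GP regression with a squared-exponential kernel, computing the posterior mean and a 95% confidence interval. The kernel hyperparameters, noise level, and toy sine-wave data are illustrative assumptions, not part of the project itself:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) covariance between two 1-D input arrays
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    # Standard GP regression equations via a Cholesky factorisation
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss_diag = np.diag(rbf_kernel(x_test, x_test))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha                       # posterior predictive mean
    v = np.linalg.solve(L, Ks)
    var = Kss_diag - np.sum(v ** 2, axis=0)   # posterior predictive variance
    return mean, np.maximum(var, 0.0)

# Toy data: noisy-free samples of sin(x) (illustrative only)
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(x)
xs = np.linspace(-3.0, 3.0, 7)

mean, var = gp_posterior(x, y, xs)
std = np.sqrt(var)
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # 95% confidence interval
```

Note how the interval tightens near observed data and widens when extrapolating, which is exactly the kind of calibrated uncertainty that DNN point predictions do not natively provide.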
Thus, the main aim of this project is to develop new advanced statistical machine learning and visualisation methods for Natural Language Processing (NLP) applications that can provide mathematically sound and explainable predictions.
This project would suit a maths or physics graduate or student who wants to extend their skills in linear algebra and probabilistic/Bayesian machine learning techniques.
Second supervisors: Gareth Peters, Stefano Padilla