Robust and Explainable Machine Learning for FinTech Applications

To develop Gaussian Process models, compare them with Deep Neural Networks, and provide explainable Machine Learning with quantifiable uncertainty for FinTech applications.
Description of the Project: 

Deep Neural Network (DNN) technologies coupled with GPU-type hardware provide practical methods for learning complex functions from vast datasets. However, their architectures are often developed using trial-and-error approaches, and the resulting systems normally provide ‘black box’ solutions containing many millions of learnt but abstract parameters. They are therefore extremely difficult to interpret and understand, and the accuracy and certainty of their predictions or classifications are normally not known.

Consequently, DNNs are often not used for high-impact decision support, as management is rarely provided with sufficient, transparent evidence to engender confidence or allow assessment of risk.

In contrast, Gaussian Processes (GPs) can be designed using highly principled methodologies, in which human knowledge and assumptions are explicitly recorded and exploited to provide parsimonious machine learning solutions. They are parsimonious in that they contain several orders of magnitude fewer parameters than DNN solutions, and these parameters often map directly to the input data, allowing explanations of the GP's operation to be generated. In addition, the uncertainty of results (e.g. the 95% confidence interval on a GP prediction) is available as a direct consequence of how GPs operate.
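As a minimal illustration of this point, the sketch below fits a GP regression model to toy data and reads off both a prediction and its 95% confidence interval. It assumes scikit-learn and NumPy are available and uses synthetic data purely as a stand-in for a financial series; kernel choices and noise levels are illustrative, not prescriptive.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1-D training data standing in for a financial time series (illustrative only).
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(30, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(30)

# The kernel records our modelling assumptions explicitly: a smooth underlying
# function (RBF) plus observation noise (WhiteKernel).
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# The GP returns a predictive mean and standard deviation at new inputs,
# from which a 95% confidence interval follows directly.
X_test = np.linspace(0, 10, 100).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```

Note how the uncertainty estimate comes for free from the GP's predictive distribution, rather than requiring a separate calibration step as is typical for DNNs.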

Thus, the main aim of this project is to develop advanced statistical machine learning and visualisation methods for financial applications that can provide mathematically sound and explainable predictions.

Second supervisors: Gareth Peters, Stefano Padilla

First Supervisor: 
University: Heriot-Watt University
First supervisor university: Heriot-Watt University
Essential skills and knowledge: Good programming skills. Curiosity as to how DNNs and GPs actually work.