Robust and Explainable Machine Learning for FinTech Applications
Deep Neural Network (DNN) technologies, coupled with GPU hardware, provide practical methods for learning complex functions from vast datasets. However, their architectures are often developed through trial and error, and the resulting systems typically provide ‘black box’ solutions containing many millions of learnt but abstract parameters. They are therefore extremely difficult to interpret and understand, and the accuracy and certainty of their predictions or classifications are normally not known.
Consequently, DNNs are often not used for high-impact decision support, as management is rarely provided with sufficient, transparent evidence to engender confidence or allow assessment of risk.
In contrast, Gaussian Processes (GPs) can be designed using highly principled methodologies, in which human knowledge and assumptions are explicitly recorded and exploited to provide parsimonious machine learning solutions. They are parsimonious in that they contain several orders of magnitude fewer parameters than DNN solutions, and these parameters often map directly to the input data, allowing explanations of the GP's operation to be generated. In addition, the uncertainty of results (e.g. the 95% confidence interval on a GP prediction) is available as part of the basic operation of GPs.
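To illustrate how a GP yields both a prediction and its uncertainty, the following is a minimal sketch of exact GP regression with a squared-exponential kernel on a toy 1-D dataset (the data, kernel hyperparameter values, and noise level are illustrative assumptions, not part of the project):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq / length_scale**2)

def gp_posterior(X_train, y_train, X_test, noise=0.1):
    """Mean and standard deviation of the GP posterior predictive at X_test."""
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)  # numerically stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Toy 1-D regression problem (illustrative, not a financial dataset)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(30, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)
X_test = np.linspace(0, 10, 100).reshape(-1, 1)

mean, std = gp_posterior(X, y, X_test)
# 95% confidence interval on each prediction
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```

Note that the model's only learnt quantities are a handful of kernel hyperparameters (length-scale, signal variance, noise level), in contrast to a DNN's millions of weights, and that the predictive interval falls out of the posterior variance with no extra machinery.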
Thus, the main aim of this project is to develop advanced statistical machine learning and visualisation methods for financial applications that can provide mathematically sound and explainable predictions.
Second supervisors: Gareth Peters, Stefano Padilla