For many of us, AI systems cannot be fully trusted, and many questions remain about their decision-making processes.
It is natural to worry about a system that is able to make decisions on our behalf.
Despite their high performance across many fields, the internal processes of many AI models are very hard to explain. This is what is known as the "black box" problem.
How can we trust the actions of a system without any knowledge of its internal workings?
My current research is concerned with finding new approaches that could open up these "black box" models.
I am a second-year Ph.D. student at Heriot-Watt University, advised by Professor Helen Hastie, and affiliated with the Edinburgh Centre for Robotics. Previously, I completed my MSc in Artificial Intelligence at the University of Aberdeen under the supervision of Professor Wei Pang.
Research Interests: Explainable AI, Machine Learning, Data Mining.