For many of us, AI systems cannot be trusted, and many questions remain about their decision-making processes.
It is natural to be worried about a system that is able to make decisions on our behalf.
Despite their high performance across many fields, the decision-making process of many AI models is very hard to explain. This is what is known as the "black box" problem.
How can we trust the actions of a system without any knowledge of its internal workings?
My current research focuses on finding new approaches that could open up these "black box" models.