
Project

Explaining AI Models to Gain Insight into the Models and Learn about the World

The project is about explaining the decisions made by Artificial Intelligence (AI) prediction models, and about using those explanations to gain global insights into the models and knowledge of the world. Advances in AI are spurred mainly by deep learning (artificial neural networks) and the availability of massive image, textual and behavioural data. This has led to high predictive accuracy, with positive economic and societal implications, but also to very complex models. Explaining the predictions of such "black box" models has gained increasing attention from the AI research community. However, current approaches and results only scratch the surface of the potential of this "explainable AI" research. The main objective of this proposal is to push the frontiers of the field by putting forward the Evidence Counterfactual (EdC) as a paradigm within explainable AI: an explanation that identifies the evidence (the features) whose removal would change the model's prediction. The project will examine how the Evidence Counterfactual can be used to generate explanations that lead to novel insights into both the AI model and the world, and will validate the new methodologies in a variety of applications, ranging from insurance to political science. Trying to explain how things work is a central driver of science; in that context, this project is not only a fundamental but also a logical next step in AI research.
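For readers unfamiliar with the paradigm, the sketch below illustrates the core idea behind an Evidence Counterfactual: greedily remove the features that contribute most to a positive prediction until the predicted class flips, and report the removed set as the explanation. This is a minimal sketch in the spirit of counterfactual search methods such as SEDC; the toy linear model, the feature weights, and the function names are illustrative assumptions, not part of the project itself.

```python
# Minimal sketch of an Evidence Counterfactual search on a toy linear model.
# The weights, feature names, and threshold here are hypothetical examples.

from typing import Dict, List


def score(features: Dict[str, float], weights: Dict[str, float]) -> float:
    """Toy linear model: weighted sum over the features present."""
    return sum(weights.get(f, 0.0) * v for f, v in features.items())


def evidence_counterfactual(
    features: Dict[str, float],
    weights: Dict[str, float],
    threshold: float = 0.0,
) -> List[str]:
    """Return a (greedy, not necessarily minimal) set of features whose
    removal flips the prediction from positive to non-positive."""
    remaining = dict(features)
    removed: List[str] = []
    while score(remaining, weights) > threshold and remaining:
        # Remove the feature contributing most to the positive score.
        best = max(remaining, key=lambda f: weights.get(f, 0.0) * remaining[f])
        removed.append(best)
        del remaining[best]
    return removed


if __name__ == "__main__":
    # Hypothetical document classified as "sports" based on word evidence.
    weights = {"goal": 0.9, "match": 0.6, "election": -0.8, "season": 0.3}
    doc = {"goal": 1.0, "match": 1.0, "season": 1.0, "election": 1.0}
    print(evidence_counterfactual(doc, weights))
    # e.g. ['goal', 'match']: had these words been absent, the document
    # would no longer have been classified as sports.
```

Such an explanation is counterfactual in nature ("had this evidence been absent, the decision would have changed"), which is what distinguishes it from importance-ranking explanations and makes it usable both locally, per prediction, and aggregated globally to study the model.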
Date: 1 Apr 2021 → Today
Keywords: Explainable Artificial Intelligence
Disciplines: Data mining, Knowledge management, Artificial intelligence, Knowledge representation and machine learning