Project

Design and evaluation framework for eXplainable AI

In recent years, we have witnessed the rapid adoption of AI to automate and solve a wide range of tasks. However, many of these systems do not explain their predictions to end users, which is critical in domains where decisions must be well informed, such as medicine, finance, and the military. This problem has driven the development of eXplainable AI (XAI), and a variety of methods have been proposed to construct explanations. One issue with current research in the area is that there is no widely adopted method to evaluate the quality of these explanations. Most of the proposed systems are assessed through user studies, where each system is evaluated with its own questionnaire or tasks. This situation leads to two problems. First, evaluations are scarce, because running user studies is time-consuming and expensive; second, systems and methods cannot be compared to each other, because every evaluation uses different criteria. This research aims to close this gap by proposing an evaluation framework for XAI methods in supervised learning. The framework will include user-study tasks and questionnaires to be applied according to the system's context. Moreover, it could potentially include metrics that measure the effectiveness of an explanation method without requiring users in the evaluation. We expect this framework to make XAI research more scalable and, by doing so, to promote the adoption of AI systems in more domains.
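To make the idea of user-free evaluation concrete, below is a minimal Python sketch of one such metric, deletion-based faithfulness, which scores a feature-attribution explanation by how quickly the model's output degrades when the features the explanation ranks as most important are removed. The toy logistic model, the zero baseline, and the function names are illustrative assumptions, not part of the project.

import numpy as np

def deletion_faithfulness(predict, x, attribution, baseline):
    """Replace features in descending order of |attribution| with the
    baseline value and record the model score after each deletion.
    A faithful attribution should make the score drop quickly, so a
    lower mean score over the deletion curve is better."""
    order = np.argsort(-np.abs(attribution))   # most important features first
    scores = [predict(x)]
    masked = x.copy()
    for i in order:
        masked[i] = baseline[i]                # "delete" one feature
        scores.append(predict(masked))
    return float(np.mean(scores))              # area-under-curve proxy

# Toy usage with a hypothetical logistic model and gradient*input attributions
rng = np.random.default_rng(0)
w = rng.normal(size=5)
predict = lambda x: 1.0 / (1.0 + np.exp(-w @ x))
x = rng.normal(size=5)
print(deletion_faithfulness(predict, x, attribution=w * x, baseline=np.zeros(5)))

Because no participants are involved, a metric of this kind can be computed automatically across many models and datasets, which is the scalability argument made above.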

Date: 11 Jan 2022 → Today
Keywords: Explainable AI, Human-Computer Interaction
Disciplines: Artificial intelligence not elsewhere classified
Project type: PhD project