Project
On explaining the solutions of constrained optimisation problems
In recent years, eXplainable AI (XAI) has attracted considerable attention. This line of research develops techniques for clarifying the solutions or models found by modern AI systems. This project aims to provide comprehensible explanations of solutions found by constraint optimisation solvers, with a particular focus on explaining why a solution is optimal. Concretely, this means answering questions a user might ask about a solution, such as 'Why does variable x have value a?', 'What are the alternative solutions?' or 'Why didn't the solver choose that other solution?'. One of the key challenges of the project arises from the wide range of questions a user can ask. A central objective is therefore to characterise this set of questions and to build efficient algorithms for answering them.
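To make the kind of question concrete, the following is a minimal, self-contained sketch (a hypothetical toy problem, not the project's actual algorithms or solver): it answers "Why does variable x have value a?" for a tiny constrained optimisation problem by brute-force comparing the optimum against the best solution obtainable when x is forced to each alternative value.

```python
# Hypothetical illustration: explaining optimality by contrasting the
# optimum with the best solutions under forced alternative assignments.
from itertools import product

def solve(domains, feasible, objective, forced=None):
    """Return (best_value, best_assignment) over all assignments that
    satisfy `feasible`, optionally forcing some variables via `forced`."""
    forced = forced or {}
    best = None
    for values in product(*domains.values()):
        assign = dict(zip(domains.keys(), values))
        if any(assign[v] != val for v, val in forced.items()):
            continue  # skip assignments that violate the forced values
        if not feasible(assign):
            continue  # skip infeasible assignments
        score = objective(assign)
        if best is None or score > best[0]:
            best = (score, assign)
    return best

# Toy problem: maximise 3x + 2y subject to x + y <= 3, x,y in {0..3}
domains = {"x": range(4), "y": range(4)}
feasible = lambda a: a["x"] + a["y"] <= 3
objective = lambda a: 3 * a["x"] + 2 * a["y"]

opt_val, opt_assign = solve(domains, feasible, objective)

# "Why does variable x have value 3?" -- because every alternative
# value for x leads to a strictly worse best objective.
for alt in domains["x"]:
    if alt == opt_assign["x"]:
        continue
    res = solve(domains, feasible, objective, forced={"x": alt})
    print(f"forcing x={alt} gives objective {res[0]} (optimum is {opt_val})")
```

Here the optimum is x=3, y=0 with objective 9, and the printed comparison is a (naive) explanation of why x takes value 3; the project's goal is to answer such questions efficiently on real solver models rather than by enumeration.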