Project

Interpretable Qualitative Evaluation for Online Recommender Systems.

Individuals often rely on recommendations from others when making routine, daily decisions. Algorithms that mimic this behaviour are vital to the success of e-commerce services. However, an open question remains: why do these algorithms make the recommendations they make? This is problematic given that the most accurate machine learning algorithms are black-box models, deployed in a dynamic environment where possibly multiple models run simultaneously and are periodically re-trained. Since any organisation requires human oversight and decision-making, there is a need for insight into user behaviour and into interactions with recommendations made by black-box machine learning algorithms. Traditionally, two recommender systems are compared on a single metric, such as click-through rate after an A/B test. We will instead assess the performance of online recommender systems qualitatively, by uncovering patterns that are characteristic of the differences in targeted users and items. We propose to adopt interpretable machine learning, where the goal is to produce explanations that can guide human understanding and decision-making. Concretely, we propose to mine interpretable association rules and to generate, possibly grouped, counterfactual explanations of why recommender system A performs better (or worse) than recommender system B.
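To illustrate the association-rule idea, the following is a minimal sketch, not the project's actual method: given a toy log of interactions (all attribute names, records, and thresholds below are invented for illustration), it enumerates small attribute sets and keeps rules of the form "attribute set → winning recommender" whose support and confidence exceed a threshold.

```python
from itertools import combinations

# Hypothetical A/B interaction log: each record pairs a set of user/item
# attributes with the recommender (A or B) that won for that interaction.
# All attribute names and labels here are illustrative, not real data.
records = [
    ({"age<30", "mobile", "genre=pop"}, "A"),
    ({"age<30", "mobile", "genre=rock"}, "A"),
    ({"age<30", "desktop", "genre=pop"}, "A"),
    ({"age>=30", "desktop", "genre=jazz"}, "B"),
    ({"age>=30", "mobile", "genre=jazz"}, "B"),
    ({"age>=30", "desktop", "genre=pop"}, "B"),
    ({"age<30", "mobile", "genre=pop"}, "A"),
    ({"age>=30", "desktop", "genre=rock"}, "B"),
]

def mine_rules(records, min_support=0.25, min_confidence=0.9, max_len=2):
    """Mine rules 'attribute set -> recommender' with support/confidence."""
    n = len(records)
    attributes = sorted({a for attrs, _ in records for a in attrs})
    rules = []
    # Enumerate candidate antecedents of up to max_len attributes.
    for k in range(1, max_len + 1):
        for antecedent in combinations(attributes, k):
            ant = set(antecedent)
            # Labels of all records covered by this antecedent.
            covered = [label for attrs, label in records if ant <= attrs]
            support = len(covered) / n
            if support < min_support:
                continue
            for label in ("A", "B"):
                confidence = covered.count(label) / len(covered)
                if confidence >= min_confidence:
                    rules.append((ant, label, support, confidence))
    return rules

rules = mine_rules(records)
for ant, label, sup, conf in rules:
    print(f"{sorted(ant)} -> {label}  (support={sup:.2f}, confidence={conf:.2f})")
```

On this toy log the miner surfaces rules such as "age<30 → A wins" and "age>=30 → B wins", i.e. human-readable characterisations of where one recommender outperforms the other; the real project would mine such rules from deployed-system logs rather than a hand-written table.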
Date: 1 Oct 2020 → 30 Sep 2021
Keywords: MACHINE LEARNING, DATA MINING
Disciplines: Data mining, Machine learning and decision making