
Explanations for network embedding-based link predictions

Book Chapter / Conference Contribution

Graphs (also called networks) are powerful data abstractions, but they are challenging to work with, as many machine learning methods cannot be applied to them directly. Network Embedding (NE) methods resolve this by learning vector representations for the nodes, for subsequent use in downstream machine-learning tasks. Link Prediction is one such important downstream task, used for example in recommender systems. NE methods achieve high accuracy on Link Prediction, but because the embedding dimensions have no intrinsic meaning, the predictions that follow from the embeddings are not straightforward to understand. Explaining why a prediction is made can increase trustworthiness, help in understanding the underlying model, give insight into which features of the network are important for the predictions, and address regulatory requirements on the ability to explain machine-learning-based decisions. We study the problem of providing explanations for NE-based link predictions and introduce ExplaiNE, an approach that derives counterfactual explanations by identifying links in the network that explain link predictions. We show how ExplaiNE can be used generically on NE-based methods, and consider ExplaiNE in more detail for Conditional Network Embedding, a particularly suitable state-of-the-art NE method. Extensive experiments demonstrate ExplaiNE's accuracy and scalability.
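To make the counterfactual idea concrete, the sketch below ranks existing links by how much their removal would lower the predicted score of a candidate link. Note this is only a brute-force illustration under assumed choices (a toy spectral embedding, a dot-product scorer, and hypothetical function names), not the gradient-based derivation ExplaiNE actually uses:

```python
# Illustrative counterfactual explanation for a link prediction.
# Assumption: a naive re-embed-after-deletion loop stands in for
# ExplaiNE's gradient-based approach; embed() and link_score() are
# hypothetical stand-ins for any NE method and scoring function.

import numpy as np

def embed(A, dim=2):
    """Toy network embedding: leading eigenvectors of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1][:dim]          # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))

def link_score(X, i, j):
    """Dot-product link-prediction score between nodes i and j."""
    return float(X[i] @ X[j])

def explain(A, i, j, dim=2):
    """Rank existing links (k, l) by the drop in the predicted (i, j) score
    when (k, l) is removed -- a counterfactual explanation of the prediction."""
    base = link_score(embed(A, dim), i, j)
    drops = []
    for k, l in zip(*np.triu(A).nonzero()):
        A2 = A.copy()
        A2[k, l] = A2[l, k] = 0                   # counterfactual: delete (k, l)
        drops.append(((int(k), int(l)), base - link_score(embed(A2, dim), i, j)))
    return sorted(drops, key=lambda t: -t[1])     # largest score drop first

# Tiny example: two triangles bridged by the edge (2, 3).
A = np.zeros((6, 6))
for k, l in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[k, l] = A[l, k] = 1
print(explain(A, 0, 3)[:3])   # links that most support predicting the (0, 3) link
```

Re-embedding after every deletion is quadratic in the number of edges and embeddings; ExplaiNE avoids this by differentiating the link probability with respect to the network, which is what makes it scalable.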
Book: Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021, Part I
Volume: 1524
Pages: 473–488
Publication year: 2021
Accessibility: Open