Venue:
SR1
Lecturer:
Amir Reza Mohammadi - researcher@DBIS
Abstract:
Explainability in recommender systems is critical for model evaluation and user trust, yet remains challenging to achieve. Among the various explanation strategies, counterfactual explanations stand out for their intuitive, user-friendly nature. By demonstrating how minimal changes to the input data would alter the recommendation outcome, they provide clear insights into model behavior, making them especially appealing for user-facing applications.
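To illustrate the idea, consider a rough sketch (not the method presented in the talk): for a toy dot-product recommender, a counterfactual explanation of a recommendation can be a small set of past interactions whose removal changes the top recommended item. The item embeddings, user history, and the single-item greedy search below are illustrative assumptions only.

import numpy as np

# Hypothetical item embeddings and user history (made up for this example).
item_vecs = {
    "a1": np.array([1.0, 0.0]),
    "a2": np.array([0.9, 0.1]),
    "b":  np.array([0.95, 0.05]),
    "c":  np.array([0.0, 1.0]),
    "d":  np.array([0.05, 0.95]),
}
history = ["a1", "a2", "c"]

def recommend(hist, candidates):
    # Score candidate items by dot product with the mean history vector.
    profile = np.mean([item_vecs[i] for i in hist], axis=0)
    return max(candidates, key=lambda i: float(profile @ item_vecs[i]))

candidates = [i for i in item_vecs if i not in history]
original = recommend(history, candidates)

# Greedy search for a single-interaction counterfactual: which past
# interaction, if removed, flips the top recommendation away from `original`?
for item in history:
    reduced = [i for i in history if i != item]
    if recommend(reduced, candidates) != original:
        print(f"Removing {item!r} flips the recommendation "
              f"from {original!r} to {recommend(reduced, candidates)!r}")
        break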
In recent years, counterfactual explanations have gained significant traction in the research community, leading to the development of diverse methods and evaluation metrics. However, the effectiveness of these metrics in assessing counterfactual explainers has not been thoroughly examined, particularly in the context of the underlying recommender system's performance.
In this presentation, we systematically investigate the evaluation metrics used for counterfactual explanations in recommender systems. Through extensive experiments, we reveal a critical yet often overlooked issue: the quality of the recommender system significantly impacts the performance of counterfactual explainers. Specifically, failing to account for the recommender’s predictive accuracy can lead to misleading conclusions about the quality of the explanations.
Our findings highlight a crucial gap in current evaluation practices. We show that inconsistencies arise when the recommender system’s performance is not explicitly considered, leading to unreliable assessments of counterfactual explanation methods. To address this, we emphasize the importance of either reporting the recommender’s performance alongside explanation metrics or integrating it directly into the evaluation process. By doing so, we aim to pave the way for more robust and reliable evaluation of counterfactual explanation methods, ultimately enhancing transparency and trustworthiness in recommender systems.
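As a minimal sketch of the reporting practice suggested above (under assumptions, not the experimental setup from the talk), one can report an explanation metric such as counterfactual validity both over all users and conditioned on users for whom the recommender was actually correct, alongside the recommender's hit rate. The UserResult structure and the per-user outcomes below are hypothetical.

from dataclasses import dataclass

@dataclass
class UserResult:
    hit: bool                    # recommender ranked the held-out item in the top-k
    counterfactual_found: bool   # explainer found a valid counterfactual

# Made-up per-user outcomes, for illustration only.
results = [
    UserResult(hit=True,  counterfactual_found=True),
    UserResult(hit=True,  counterfactual_found=False),
    UserResult(hit=False, counterfactual_found=True),
    UserResult(hit=False, counterfactual_found=True),
]

hit_rate = sum(r.hit for r in results) / len(results)
validity_all = sum(r.counterfactual_found for r in results) / len(results)
hits = [r for r in results if r.hit]
validity_given_hit = sum(r.counterfactual_found for r in hits) / len(hits)

# Reporting the recommender's accuracy next to (and conditioned into) the
# explanation metric prevents a weak recommender from distorting the
# apparent quality of its counterfactual explainer.
print(f"recommender hit rate:              {hit_rate:.2f}")
print(f"counterfactual validity (all):     {validity_all:.2f}")
print(f"counterfactual validity (on hits): {validity_given_hit:.2f}")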