Authors: Paz-Ruza, Jorge; Alonso-Betanzos, Amparo; Guijarro-Berdiñas, Bertha; Cancela, Brais; Eiras-Franco, Carlos
Date available: 2024-07-05
Date issued: 2024-11
Citation: J. Paz-Ruza, A. Alonso-Betanzos, B. Guijarro-Berdiñas, B. Cancela, and C. Eiras-Franco, "Sustainable transparency on recommender systems: Bayesian ranking of images for explainability", Information Fusion, Vol. 111, Nov. 2024, article 102497, doi: 10.1016/j.inffus.2024.102497
ISSN: 1566-2535
URI: http://hdl.handle.net/2183/37744

Abstract: Recommender systems have become crucial in the modern world: they commonly guide users towards relevant content or products, and they exert a large influence over the decisions of users and citizens. However, ensuring transparency and user trust in these systems remains a challenge; personalized explanations have emerged as a solution, offering justifications for recommendations. Among the existing approaches for generating personalized explanations, using existing visual content created by users is a promising option to maximize transparency and user trust. State-of-the-art models that follow this approach, despite leveraging highly optimized architectures, employ surrogate learning tasks that do not efficiently model the objective of ranking images as explanations for a given recommendation; this leads to a suboptimal training process with high computational costs that cannot be reduced without affecting model performance.
This work presents BRIE, a novel model that leverages Bayesian Pairwise Ranking to enhance the training process, consistently outperforming state-of-the-art models on six real-world datasets while reducing model size by up to 64 times and CO2 emissions by up to 75% in training and inference.

Language: English
License: Attribution 4.0 International (CC BY), http://creativecommons.org/licenses/by/3.0/es/
Keywords: Dyadic data; Explainable artificial intelligence; Explainable recommendations; Frugal AI; Machine learning; Recommender systems
Title: Sustainable transparency on recommender systems: Bayesian ranking of images for explainability
Type: journal article
Access: open access
DOI: 10.1016/j.inffus.2024.102497
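As a rough illustration of the Bayesian Pairwise Ranking (BPR) objective the abstract names (this is a generic textbook sketch, not the authors' BRIE implementation; the scores and image examples are hypothetical):

```python
import numpy as np

def sigmoid(x):
    """Logistic function, used to squash pairwise score differences into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(score_pos, score_neg):
    """Bayesian Pairwise Ranking loss: -log sigma(s_pos - s_neg).

    The loss is small when the positive item (here, an image that should
    rank high as an explanation) is scored well above the negative item,
    and grows as the ranking is inverted.
    """
    return -np.log(sigmoid(score_pos - score_neg))

# Toy scores from a hypothetical explanation-ranking model:
# the first pair is ranked correctly, the second is inverted.
s_pos = np.array([2.0, 0.5])
s_neg = np.array([0.0, 1.0])
loss = bpr_loss(s_pos, s_neg)
```

Training on such pairwise comparisons directly optimizes the ranking of candidate images, rather than a surrogate task such as pointwise score regression.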