Sustainable Techniques to Improve Data Quality for Training Image-Based Explanatory Models for Recommender Systems

Bibliographic citation

Paz-Ruza, J., Esteban-Martínez, D., Alonso-Betanzos, A., Guijarro-Berdiñas, B. (2026). Sustainable Techniques to Improve Data Quality for Training Image-Based Explanatory Models for Recommender Systems. In: Senn, W., et al. Artificial Neural Networks and Machine Learning – ICANN 2025. ICANN 2025. Lecture Notes in Computer Science, vol 16070. Springer, Cham. https://doi.org/10.1007/978-3-032-04549-2_19

Type of academic work

Academic degree

Abstract

Visual explanations based on user-uploaded images are an effective and self-contained approach to providing transparency in Recommender Systems (RS), but intrinsic limitations of the data used in this explainability paradigm force existing approaches to train on low-quality data that is highly sparse and suffers from labelling noise. Popular training enrichment approaches, such as model enlargement or massive data gathering, are expensive and environmentally unsustainable, so we seek to provide better visual explanations for RS in line with the principles of Responsible AI. In this work, we explore the intersection of effective and sustainable training enrichment strategies for visual-based RS explainability models by developing three novel strategies focused on training data quality: 1) selection of reliable negative training examples using Positive-Unlabelled Learning, 2) transform-based data augmentation, and 3) text-to-image generative data augmentation. Integrating these strategies into three state-of-the-art explainability models improves their performance in relevant ranking metrics by 5% without penalizing their practical long-term sustainability, as tested on multiple real-world restaurant recommendation explanation datasets.
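To illustrate the second strategy mentioned in the abstract, the following is a minimal, hypothetical sketch of transform-based data augmentation; it is not the authors' implementation. It represents an image as a nested list of pixel intensities and derives extra label-preserving training examples from a single input via simple geometric transforms:

```python
# Illustrative sketch only (assumed, not from the paper): transform-based
# augmentation enlarges the training set from existing user-uploaded images
# instead of gathering new data. A toy "image" is a list of pixel rows.

def hflip(img):
    """Horizontal flip: mirror each row."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img):
    """Return the original image plus simple label-preserving variants."""
    return [img, hflip(img), rotate90(img)]

image = [[1, 2, 3],
         [4, 5, 6]]
augmented = augment(image)
print(len(augmented))  # 3 training examples derived from 1 image
print(augmented[1])    # [[3, 2, 1], [6, 5, 4]]
```

In practice such transforms would be applied with an image library to real photographs, but the principle is the same: each variant keeps the original label, densifying sparse training data at negligible computational cost.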

Rights

© 2026 The Author(s), under exclusive license to Springer Nature Switzerland AG