Quantitative and Qualitative Evaluation on Local Explainability Models for Anomaly Detection Algorithms
| Field | Value |
| --- | --- |
| UDC.coleccion | Investigación |
| UDC.conferenceTitle | IWANN 2025 |
| UDC.departamento | Ciencias da Computación e Tecnoloxías da Información |
| UDC.grupoInv | Laboratorio de Investigación e Desenvolvemento en Intelixencia Artificial (LIDIA) |
| UDC.institutoCentro | CITIC - Centro de Investigación de Tecnoloxías da Información e da Comunicación |
| dc.contributor.author | Esteban Martínez, David |
| dc.contributor.author | Eiras-Franco, Carlos |
| dc.contributor.author | Guijarro-Berdiñas, Bertha |
| dc.contributor.author | Alonso-Betanzos, Amparo |
| dc.date.accessioned | 2025-11-11T09:15:27Z |
| dc.date.available | 2025-11-11T09:15:27Z |
| dc.date.issued | 2025-10-01 |
| dc.description | Paper presented at: 18th International Work-Conference on Artificial Neural Networks (IWANN 2025), A Coruña, Spain, June 16–18, 2025. Part of the book series Lecture Notes in Computer Science (LNCS, volume 16009); included in the conference series International Work-Conference on Artificial Neural Networks. This version of the article has been accepted for publication after peer review (when applicable) and is subject to Springer Nature's AM terms of use, but it is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-032-02728-3_48 |
| dc.description.abstract | [Abstract]: There is an increasingly urgent need to address the lack of transparency and clarity in the internal processes of AI (Artificial Intelligence) algorithms. In this paper, we explore local explainability techniques, LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), to create a new layer of explanations on top of any anomaly detection model. This layer helps human supervisors better understand model behavior and the rationale behind its classification decisions. To assess the quality of these explanations, we conducted a qualitative analysis through a survey and a quantitative analysis using Quantus, a robust Python toolkit for evaluating explainability. The results of our experiments underscore the subtle trade-offs among various explainability techniques and emphasize the importance of carefully considering the context when applying explainability techniques. |
| dc.description.sponsorship | This research work is part of project PID2019-109238GB-C22, funded by MICIU/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", as well as by the Ministry for Digital Transformation and Civil Service and 'NextGenerationEU'/PRTR under Grant TSI-100925-2023-1, and by the Xunta de Galicia (Grant ED431C 2022/44) with European Union ERDF funds. CITIC, as a Research Center accredited by the Galician University System, is funded by the "Consellería de Cultura, Educación e Universidade" of the Xunta de Galicia, 80% through the ERDF Operational Programme Galicia 2014–2020 and the remaining 20% by the "Secretaría Xeral de Universidades" (Grant ED431G 2023/01). |
| dc.description.sponsorship | Xunta de Galicia; ED431C 2022/44 |
| dc.description.sponsorship | Xunta de Galicia; ED431G 2023/01 |
| dc.identifier.citation | Esteban-Martínez, D., Eiras-Franco, C., Guijarro-Berdiñas, B., Alonso-Betanzos, A. (2026). Quantitative and Qualitative Evaluation on Local Explainability Models for Anomaly Detection Algorithms. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2025. Lecture Notes in Computer Science, vol 16009. Springer, Cham. https://doi.org/10.1007/978-3-032-02728-3_48 |
| dc.identifier.doi | 10.1007/978-3-032-02728-3_48 |
| dc.identifier.isbn | 978-3-032-02728-3 |
| dc.identifier.issn | 1611-3349 |
| dc.identifier.uri | https://hdl.handle.net/2183/46389 |
| dc.language.iso | eng |
| dc.publisher | Springer Nature |
| dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2019-109238GB-C22/ES/APRENDIZAJE AUTOMATICO ESCALABLE Y EXPLICABLE/ |
| dc.relation.projectID | info:eu-repo/grantAgreement/MTDPF//TSI-100925-2023-1/ES/CÁTEDRA UDC-INDITEX DE IA EN ALGORITMOS VERDES |
| dc.relation.uri | https://doi.org/10.1007/978-3-032-02728-3_48 |
| dc.rights | Copyright © 2026, The Author(s), under exclusive license to Springer Nature Switzerland AG |
| dc.rights.accessRights | embargoed access |
| dc.subject | Explainable AI |
| dc.subject | Machine Learning |
| dc.subject | Anomaly detection |
| dc.subject | Categorical-numerical variables |
| dc.subject | Explainability Techniques |
| dc.subject | Explainability analysis tools |
| dc.title | Quantitative and Qualitative Evaluation on Local Explainability Models for Anomaly Detection Algorithms |
| dc.type | conference output |
| dspace.entity.type | Publication |
| relation.isAuthorOfPublication | ca60a4d3-b38f-4d91-bfa6-f855a8e171ab |
| relation.isAuthorOfPublication | d839396d-454e-4ccd-9322-d3e89a876865 |
| relation.isAuthorOfPublication | a89f1cad-dbc5-471f-986a-26c021ed4a95 |
| relation.isAuthorOfPublication.latestForDiscovery | ca60a4d3-b38f-4d91-bfa6-f855a8e171ab |
Files
- EirasFranco_Carlos_2026_Quantitative_and_Qualitative_Evaluation_on_Local_Explainability_Models.pdf (573.57 KB, Adobe Portable Document Format)