Quantitative and Qualitative Evaluation on Local Explainability Models for Anomaly Detection Algorithms

UDC.coleccion: Investigación
UDC.conferenceTitle: IWANN 2025
UDC.departamento: Ciencias da Computación e Tecnoloxías da Información
UDC.grupoInv: Laboratorio de Investigación e Desenvolvemento en Intelixencia Artificial (LIDIA)
UDC.institutoCentro: CITIC - Centro de Investigación de Tecnoloxías da Información e da Comunicación
dc.contributor.author: Esteban Martínez, David
dc.contributor.author: Eiras-Franco, Carlos
dc.contributor.author: Guijarro-Berdiñas, Bertha
dc.contributor.author: Alonso-Betanzos, Amparo
dc.date.accessioned: 2025-11-11T09:15:27Z
dc.date.available: 2025-11-11T09:15:27Z
dc.date.issued: 2025-10-01
dc.description: Work presented at: 18th International Work-Conference on Artificial Neural Networks, IWANN 2025, A Coruña, Spain, June 16–18, 2025. Part of the book series: Lecture Notes in Computer Science (LNCS, volume 16009). Included in the following conference series: International Work-Conference on Artificial Neural Networks. This version of the article has been accepted for publication after peer review (when applicable) and is subject to Springer Nature's AM terms of use; it is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-032-02728-3_48
dc.description.abstract: [Abstract]: There is an increasingly urgent need to address the lack of transparency and clarity in the internal processes of AI (Artificial Intelligence) algorithms. In this paper, we explore the local explainability techniques LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to create a new layer of explanations on top of any anomaly detection model. This layer helps human supervisors better understand model behavior and the rationale behind its classification decisions. To assess the quality of these explanations, we conducted a qualitative analysis through a survey and a quantitative analysis using Quantus, a robust Python toolkit for evaluating explainability. The results of our experiments underscore the subtle trade-offs among various explainability techniques and emphasize the importance of carefully considering the context in which they are applied.
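As an illustration of the kind of local, model-agnostic explanation layer the abstract describes, the following is a minimal LIME-style sketch: a proximity-weighted linear surrogate is fitted around one instance of a black-box anomaly scorer, and its coefficients serve as per-feature local importances. The toy scorer, the data, and all parameter values are hypothetical stand-ins for illustration only; the paper itself uses the LIME and SHAP libraries and evaluates the explanations with Quantus.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # toy "normal" data, 3 features

def anomaly_score(samples):
    # Hypothetical black-box anomaly scorer: L1 distance from the
    # feature-wise median of the reference data.
    return np.abs(samples - np.median(X, axis=0)).sum(axis=1)

def lime_style_explanation(x, n_samples=500, scale=0.5):
    """Fit a weighted linear surrogate around x (LIME-style sketch)."""
    # 1. Perturb the instance and score the perturbations.
    perturbed = x + rng.normal(scale=scale, size=(n_samples, x.size))
    scores = anomaly_score(perturbed)
    # 2. Weight perturbations by proximity to x (closer counts more).
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    # 3. Weighted least squares: scores ~ intercept + coef . features
    A = np.hstack([np.ones((n_samples, 1)), perturbed])
    w = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(A * w[:, None], scores * w, rcond=None)
    return coef[1:]  # per-feature local importances

x = np.array([3.0, 0.0, 0.0])  # anomalous only in the first feature
importances = lime_style_explanation(x)
print(importances)
```

With this toy scorer, the surrogate attributes the high anomaly score almost entirely to the first feature, which is the behavior a human supervisor would expect the explanation layer to surface.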
dc.description.sponsorship: This research work is part of project PID2019-109238GB-C22, funded by MICIU/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”, as well as by the Ministry for Digital Transformation and Civil Service and “NextGenerationEU”/PRTR under Grant TSI-100925-2023-1, and by the Xunta de Galicia (Grant ED431C 2022/44) with European Union ERDF funds. CITIC, as a Research Center accredited by the Galician University System, is funded by the “Consellería de Cultura, Educación e Universidade” of the Xunta de Galicia, supported 80% through the ERDF Operational Programme Galicia 2014–2020 and the remaining 20% by the “Secretaría Xeral de Universidades” (Grant ED431G 2023/01).
dc.description.sponsorship: Xunta de Galicia; ED431C 2022/44
dc.description.sponsorship: Xunta de Galicia; ED431G 2023/01
dc.identifier.citation: Esteban-Martínez, D., Eiras-Franco, C., Guijarro-Berdiñas, B., Alonso-Betanzos, A. (2026). Quantitative and Qualitative Evaluation on Local Explainability Models for Anomaly Detection Algorithms. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2025. Lecture Notes in Computer Science, vol 16009. Springer, Cham. https://doi.org/10.1007/978-3-032-02728-3_48
dc.identifier.doi: 10.1007/978-3-032-02728-3_48
dc.identifier.isbn: 978-3-032-02728-3
dc.identifier.issn: 1611-3349
dc.identifier.uri: https://hdl.handle.net/2183/46389
dc.language.iso: eng
dc.publisher: Springer Nature
dc.relation.projectID: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2019-109238GB-C22/ES/APRENDIZAJE AUTOMATICO ESCALABLE Y EXPLICABLE/
dc.relation.projectID: info:eu-repo/grantAgreement/MTDPF//TSI-100925-2023-1/ES/CÁTEDRA UDC-INDITEX DE IA EN ALGORITMOS VERDES
dc.relation.uri: https://doi.org/10.1007/978-3-032-02728-3_48
dc.rights: Copyright © 2026, The Author(s), under exclusive license to Springer Nature Switzerland AG
dc.rights.accessRights: embargoed access
dc.subject: Explainable AI
dc.subject: Machine Learning
dc.subject: Anomaly detection
dc.subject: Categorical-numerical variables
dc.subject: Explainability Techniques
dc.subject: Explainability analysis tools
dc.title: Quantitative and Qualitative Evaluation on Local Explainability Models for Anomaly Detection Algorithms
dc.type: conference output
dspace.entity.type: Publication
relation.isAuthorOfPublication: ca60a4d3-b38f-4d91-bfa6-f855a8e171ab
relation.isAuthorOfPublication: d839396d-454e-4ccd-9322-d3e89a876865
relation.isAuthorOfPublication: a89f1cad-dbc5-471f-986a-26c021ed4a95
relation.isAuthorOfPublication.latestForDiscovery: ca60a4d3-b38f-4d91-bfa6-f855a8e171ab

Files

Original bundle

Name: EirasFranco_Carlos_2026_Quantitative_and_Qualitative_Evaluation_on_Local_Explainability_Models.pdf
Size: 573.57 KB
Format: Adobe Portable Document Format