Enhancing Vision Transformer Explainability using Artificial Astrocytes
| UDC.coleccion | Research | |
| UDC.conferenceTitle | CVPRW 2025 | |
| UDC.departamento | Ciencias da Computación e Tecnoloxías da Información | |
| UDC.endPage | 64 | |
| UDC.grupoInv | Laboratorio de Enxeñaría do Software (ISLA) | |
| UDC.startPage | 58 | |
| dc.contributor.author | Echevarrieta-Catalan, Nicolas | |
| dc.contributor.author | Ribas-Rodríguez, Ana | |
| dc.contributor.author | Cedrón, Francisco | |
| dc.contributor.author | Schwartz, Odelia | |
| dc.contributor.author | Aguiar-Pulido, Vanessa | |
| dc.date.accessioned | 2025-10-22T07:23:52Z | |
| dc.date.available | 2025-10-22T07:23:52Z | |
| dc.date.issued | 2025-09 | |
| dc.description | Work presented at: 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 11-12, 2025, Nashville, United States. © 2025 IEEE. This version of the article has been accepted for publication, after peer review. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The Version of Record is available online at: https://doi.org/10.1109/CVPRW67362.2025.00011 | |
| dc.description.abstract | [Abstract]: Machine learning models achieve high precision, but their decision-making processes often lack explainability. Furthermore, as model complexity increases, explainability typically decreases. Existing efforts to improve explainability primarily involve developing new eXplainable artificial intelligence (XAI) techniques or incorporating explainability constraints during training. While these approaches yield specific improvements, their applicability remains limited. In this work, we propose the Vision Transformer with artificial Astrocytes (ViTA). This training-free approach is inspired by neuroscience and enhances the reasoning of a pretrained deep neural network to generate more human-aligned explanations. We evaluated our approach employing two well-known XAI techniques, Grad-CAM and Grad-CAM++, and compared it to a standard Vision Transformer (ViT). Using the ClickMe dataset, we quantified the similarity between the heatmaps produced by the XAI techniques and a (human-aligned) ground truth. Our results consistently demonstrate that incorporating artificial astrocytes enhances the alignment of model explanations with human perception, leading to statistically significant improvements across all XAI techniques and metrics utilized. | |
| dc.identifier.citation | N. Echevarrieta-Catalan, A. Ribas-Rodriguez, F. Cedron, O. Schwartz and V. Aguiar-Pulido, "Enhancing Vision Transformer Explainability using Artificial Astrocytes," 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 2025, pp. 58-64, doi: 10.1109/CVPRW67362.2025.00011 | |
| dc.identifier.doi | 10.1109/CVPRW67362.2025.00011 | |
| dc.identifier.isbn | 979-8-3315-9994-2 | |
| dc.identifier.issn | 2160-7516 | |
| dc.identifier.uri | https://hdl.handle.net/2183/46043 | |
| dc.language.iso | eng | |
| dc.publisher | IEEE | |
| dc.relation.uri | https://doi.org/10.1109/CVPRW67362.2025.00011 | |
| dc.rights | Copyright © 2025, IEEE | |
| dc.rights.accessRights | open access | |
| dc.subject | Explainability | |
| dc.subject | XAI | |
| dc.subject | Vision transformers | |
| dc.subject | Deep neural networks | |
| dc.subject | Computer vision | |
| dc.title | Enhancing Vision Transformer Explainability using Artificial Astrocytes | |
| dc.type | conference output | |
| dspace.entity.type | Publication | |
| relation.isAuthorOfPublication | c4435437-f4af-4d4e-b540-21f805457be2 | |
| relation.isAuthorOfPublication.latestForDiscovery | c4435437-f4af-4d4e-b540-21f805457be2 | |
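The abstract above describes quantifying the similarity between XAI heatmaps (from Grad-CAM or Grad-CAM++) and the human-aligned ground truth from ClickMe. As an illustrative sketch only (this record does not specify the paper's exact metrics; the function name and the choice of Spearman rank correlation here are assumptions), such a comparison might look like:

```python
import numpy as np

def heatmap_similarity(model_map: np.ndarray, human_map: np.ndarray) -> float:
    """Hypothetical similarity score: Spearman rank correlation
    between two heatmaps, flattened to 1-D."""
    a = model_map.ravel().astype(float)
    b = human_map.ravel().astype(float)
    # Rank-transform both maps (ties broken arbitrarily), then
    # compute the Pearson correlation of the standardized ranks.
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    ra = (ra - ra.mean()) / ra.std()
    rb = (rb - rb.mean()) / rb.std()
    return float((ra * rb).mean())
```

A rank-based score like this is insensitive to the absolute scale of the heatmaps, which is convenient when comparing saliency maps produced by different XAI techniques.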
Files
Original bundle
- Name: Cedron_Francisco_2025_Enhancing_Vision_Transformer_Explainability.pdf
- Size: 5.55 MB
- Format: Adobe Portable Document Format

