Self-Supervised Multimodal Reconstruction of Retinal Images Over Paired Datasets
Use this link to cite
http://hdl.handle.net/2183/26066
Except where otherwise noted, the item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International
Collections
- GI-VARPA - Artigos [79]
Metadata
Title
Self-Supervised Multimodal Reconstruction of Retinal Images Over Paired Datasets
Date
2020-12-15
Bibliographic citation
Álvaro S. Hervella, José Rouco, Jorge Novo, Marcos Ortega, Self-supervised multimodal reconstruction of retinal images over paired datasets, Expert Systems with Applications, Volume 161, 2020, 113674, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2020.113674.
Abstract
Data scarcity represents an important constraint for the training of deep neural networks in medical imaging. Medical image labeling, especially if pixel-level annotations are required, is an expensive task that needs expert intervention and usually results in a reduced number of annotated samples. In contrast, extensive amounts of unlabeled data are produced in the daily clinical practice, including paired multimodal images from patients that were subjected to multiple imaging tests. This work proposes a novel self-supervised multimodal reconstruction task that takes advantage of this unlabeled multimodal data for learning about the domain without human supervision. Paired multimodal data is a rich source of clinical information that can be naturally exploited by trying to estimate one image modality from others. This multimodal reconstruction requires the recognition of domain-specific patterns that can be used to complement the training of image analysis tasks in the same domain for which annotated data is scarce.
In this work, a set of experiments is performed using a multimodal setting of retinography and fluorescein angiography pairs that offer complementary information about the eye fundus. The evaluations performed on different public datasets, which include pathological and healthy data samples, demonstrate that a network trained for self-supervised multimodal reconstruction of angiography from retinography achieves unsupervised recognition of important retinal structures. These results indicate that the proposed self-supervised task provides relevant cues for image analysis tasks in the same domain.
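The self-supervised task described in the abstract, estimating one image modality from its paired counterpart, reduces to ordinary supervised regression in which the "label" is the second image rather than a human annotation. The following is a minimal NumPy sketch of that idea on synthetic paired data; the linear model, the squared-error loss, and all array shapes are illustrative stand-ins (assumptions), not the deep network or training setup used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "paired" data: modality B is an unknown linear map of modality A.
# This stands in for retinography/angiography pairs; no human labels are used.
n_pairs, n_pixels = 200, 64
A = rng.normal(size=(n_pairs, n_pixels))            # modality 1 (e.g. retinography)
W_true = rng.normal(size=(n_pixels, n_pixels)) / n_pixels
B = A @ W_true                                      # modality 2 (e.g. angiography)

# Train a linear "reconstructor" by gradient descent on the reconstruction loss.
W = np.zeros((n_pixels, n_pixels))
lr = 0.05
for step in range(500):
    pred = A @ W
    grad = A.T @ (pred - B) / n_pairs               # gradient of the mean squared error
    W -= lr * grad

mse = np.mean((A @ W - B) ** 2)
print(f"final reconstruction MSE: {mse:.6f}")
```

The point of the sketch is that the supervision signal comes entirely from the paired data itself: minimizing the cross-modal reconstruction error forces the model to capture structure shared by both modalities, which is what the paper exploits to learn retinal structures without annotations.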
Keywords
Self-supervised learning
Eye fundus
Deep learning
Multimodal
Retinography
Angiography
Publisher's version
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
ISSN
0957-4174
1873-6793