
Deformable registration of multimodal retinal images using a weakly supervised deep learning approach

MartinezRio_Javier_2023_Deformable_regist_multim_retinal_images.pdf (4.122Mb)
Use this link to cite
http://hdl.handle.net/2183/37459
Atribución 3.0 España
Except where otherwise noted, the item's license is described as Atribución 3.0 España
Collections
  • Investigación (FIC) [1730]
Metadata
Title
Deformable registration of multimodal retinal images using a weakly supervised deep learning approach
Author(s)
Martínez-Río, Javier
Carmona, Enrique J.
Cancelas, Daniel
Novo Buján, Jorge
Ortega Hortas, Marcos
Date
2023-03-03
Bibliographic citation
Martínez-Río, J., Carmona, E.J., Cancelas, D. et al. Deformable registration of multimodal retinal images using a weakly supervised deep learning approach. Neural Comput & Applic 35, 14779–14797 (2023). https://doi.org/10.1007/s00521-023-08454-8
Abstract
[Abstract]: There are different retinal vascular imaging modalities widely used in clinical practice to diagnose different retinal pathologies. The joint analysis of these multimodal images is of increasing interest, since each of them provides common and complementary visual information. However, comparing two images obtained with different techniques and containing the same retinal region of interest requires a prior registration of both images. Here, we present a weakly supervised deep learning methodology for robust deformable registration of multimodal retinal images, which is applied to implement a method for the registration of fluorescein angiography (FA) and optical coherence tomography angiography (OCTA) images. This methodology is strongly inspired by VoxelMorph, a state-of-the-art general unsupervised deep learning framework for deformable registration of unimodal medical images. The method was evaluated on a public dataset with 172 pairs of FA and superficial plexus OCTA images. The degree of alignment of the common information (blood vessels) and preservation of the non-common information (image background) in the transformed image were measured using the Dice coefficient (DC) and zero-normalized cross-correlation (ZNCC), respectively. The average values of these metrics, with standard deviations, were DC = 0.72 ± 0.10 and ZNCC = 0.82 ± 0.04. The time required to obtain each pair of registered images was 0.12 s. These results outperform the rigid and deformable registration methods with which our method was compared.
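The two evaluation metrics named in the abstract, the Dice coefficient over binary vessel masks and zero-normalized cross-correlation over image intensities, follow standard definitions. Below is a minimal NumPy sketch of those definitions; the function names and inputs are illustrative, not taken from the paper's evaluation code:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def zncc(img_a, img_b):
    """Zero-normalized cross-correlation between two same-shaped images, in [-1, 1]."""
    a = np.asarray(img_a, dtype=np.float64).ravel()
    b = np.asarray(img_b, dtype=np.float64).ravel()
    a = a - a.mean()  # zero-center both images
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0:
        return 0.0  # at least one image is constant
    return float((a * b).sum() / denom)
```

In the paper's setting, DC would be computed between vessel segmentations of the fixed and the transformed moving image, and ZNCC between the background regions, so that high DC indicates good vessel alignment while high ZNCC indicates the deformation did not distort non-vascular content.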
Keywords
Multimodal image registration
Diffeomorphic transformation
Deep learning
VoxelMorph
OCT angiography
Fluorescein angiography
 
Publisher's version
https://doi.org/10.1007/s00521-023-08454-8
Rights
Atribución 3.0 España
ISSN
1433-3058
0941-0643
UNIVERSIDADE DA CORUÑA. Servizo de Biblioteca. DSpace Software Copyright © 2002-2013 Duraspace