Show simple item record

dc.contributor.author: Martínez-Río, Javier
dc.contributor.author: Carmona, Enrique J.
dc.contributor.author: Cancelas, Daniel
dc.contributor.author: Novo Buján, Jorge
dc.contributor.author: Ortega Hortas, Marcos
dc.date.accessioned: 2024-06-27T08:02:58Z
dc.date.available: 2024-06-27T08:02:58Z
dc.date.issued: 2023-03-03
dc.identifier.citation: Martínez-Río, J., Carmona, E.J., Cancelas, D. et al. Deformable registration of multimodal retinal images using a weakly supervised deep learning approach. Neural Comput & Applic 35, 14779–14797 (2023). https://doi.org/10.1007/s00521-023-08454-8
dc.identifier.issn: 1433-3058
dc.identifier.issn: 0941-0643
dc.identifier.uri: http://hdl.handle.net/2183/37459
dc.description.abstract: [Abstract]: There are different retinal vascular imaging modalities widely used in clinical practice to diagnose different retinal pathologies. The joint analysis of these multimodal images is of increasing interest, since each modality provides common and complementary visual information. However, to facilitate the comparison of two images that were obtained with different techniques and contain the same retinal region of interest, a prior registration of both images is necessary. Here, we present a weakly supervised deep learning methodology for robust deformable registration of multimodal retinal images, which is applied to implement a method for the registration of fluorescein angiography (FA) and optical coherence tomography angiography (OCTA) images. This methodology is strongly inspired by VoxelMorph, a state-of-the-art general unsupervised deep learning framework for deformable registration of unimodal medical images. The method was evaluated on a public dataset with 172 pairs of FA and superficial plexus OCTA images. The degree of alignment of the common information (blood vessels) and the preservation of the non-common information (image background) in the transformed image were measured using the Dice coefficient (DC) and zero-normalized cross-correlation (ZNCC), respectively. The average values of these metrics, including standard deviations, were DC = 0.72 ± 0.10 and ZNCC = 0.82 ± 0.04. The time required to obtain each pair of registered images was 0.12 s. These results outperform the rigid and deformable registration methods with which our method was compared.
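As a hedged illustration of the pipeline sketched in the abstract, the snippet below warps a moving image with a dense displacement field (the kind of output a VoxelMorph-style registration network predicts) and computes the two evaluation metrics named above: the Dice coefficient on binary vessel masks and ZNCC on image backgrounds. This is a minimal NumPy/SciPy sketch under assumed array shapes, not the authors' implementation; all function names are hypothetical.

import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, displacement):
    # Warp a 2-D grayscale image with a dense per-pixel displacement
    # field of shape (2, H, W) holding (dy, dx) offsets, as a
    # VoxelMorph-style network would predict (assumed representation).
    h, w = moving.shape
    grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([grid_y + displacement[0], grid_x + displacement[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

def dice_coefficient(mask_a, mask_b):
    # Dice coefficient between two binary vessel masks: measures how
    # well the common information (blood vessels) is aligned.
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def zncc(img_a, img_b, eps=1e-8):
    # Zero-normalized cross-correlation: measures how well the
    # non-common information (image background) is preserved.
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.mean(a * b))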
dc.description.sponsorship: This work was supported by the Ministerio de Ciencia, Innovación y Universidades, Government of Spain, through the RTI2018-095894-B-I00 research project. Some of the authors of this work also receive financial support from the European Social Fund through the predoctoral contract ref. PEJD-2019-PRE/TIC17030 and the research assistant contract ref. PEJ-2019-AI/TIC-13771.
dc.language.iso: eng
dc.publisher: Springer
dc.relation: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/RTI2018-095894-B-I00/ES/DESARROLLO DE TECNOLOGIAS INTELIGENTES PARA DIAGNOSTICO DE LA DMAE BASADAS EN EL ANALISIS AUTOMATICO DE NUEVAS MODALIDADES HETEROGENEAS DE ADQUISICION DE IMAGEN OFTALMOLOGICA
dc.relation.uri: https://doi.org/10.1007/s00521-023-08454-8
dc.rights: Atribución 3.0 España
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.subject: Multimodal image registration
dc.subject: Diffeomorphic transformation
dc.subject: Deep learning
dc.subject: VoxelMorph
dc.subject: OCT angiography
dc.subject: Fluorescein angiography
dc.title: Deformable registration of multimodal retinal images using a weakly supervised deep learning approach
dc.type: info:eu-repo/semantics/article
dc.rights.access: info:eu-repo/semantics/openAccess
UDC.journalTitle: Neural Computing and Applications
UDC.volume: 35
UDC.startPage: 14779
UDC.endPage: 14797


Files in this item


This item appears in the following collection(s)
