Show simple item record

dc.contributor.author: Rivas-Villar, David
dc.contributor.author: Hervella, Álvaro S.
dc.contributor.author: Rouco, J.
dc.contributor.author: Novo Buján, Jorge
dc.date.accessioned: 2024-06-27T12:28:00Z
dc.date.available: 2024-06-27T12:28:00Z
dc.date.issued: 2023-07-01
dc.identifier.citation: D. Rivas-Villar, Á. S. Hervella, J. Rouco, and J. Novo, "Joint keypoint detection and description network for color fundus image registration," Quant Imaging Med Surg, vol. 13, no. 7, pp. 4540-4562, Jul. 2023, doi: 10.21037/qims-23-4. [es_ES]
dc.identifier.issn: 2223-4292
dc.identifier.issn: 2223-4306
dc.identifier.uri: http://hdl.handle.net/2183/37498
dc.description.abstract: [Abstract]: Background: Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. In these cases, image registration, which is the process of aligning images taken from different viewpoints or moments in time, is fundamental to compare different images and to assess changes in their appearance, commonly caused by disease progression. Currently, the field of color fundus registration is dominated by classical methods, as deep learning alternatives have not shown sufficient improvement over classic methods to justify the added computational cost. However, deep learning registration methods are still considered beneficial as they can be easily adapted to different modalities and devices following a data-driven learning approach. Methods: In this work, we propose a novel methodology to register color fundus images using deep learning for the joint detection and description of keypoints. In particular, we use an unsupervised neural network trained to obtain repeatable keypoints and reliable descriptors. These keypoints and descriptors allow producing an accurate registration using RANdom SAmple Consensus (RANSAC). We train the method on the Messidor dataset and test it on the Fundus Image Registration (FIRE) dataset, both of which are publicly accessible. Results: Our work demonstrates a color fundus registration method that is robust to changes in imaging devices and capture conditions. Moreover, we conduct multiple experiments exploring several of the method's parameters to assess their impact on the registration performance. The method obtained an overall Registration Score of 0.695 for the whole FIRE dataset (0.925 for category S, 0.352 for P, and 0.726 for A). Conclusions: Our proposal improves on the results of previous deep learning methods in every category and surpasses the performance of classical approaches in category A, which features disease progression and thus represents the most relevant scenario for clinical practice, as registration is commonly used to monitor disease progression in patients. [es_ES]
dc.description.sponsorship: This work was supported by Ministerio de Ciencia e Innovación, Government of Spain, through the RTI2018-095894-B-I00, PID2019-108435RB-I00, TED2021-131201B-I00, and PDC2022-133132-I00 research projects; by Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the Grupos de Referencia Competitiva grant (Ref. ED431C 2020/24), a predoctoral fellowship (Ref. ED481A 2021/147), and a postdoctoral fellowship (Ref. ED481B-2022-025); and by CITIC, Centro de Investigación de Galicia (Ref. ED431G 2019/01), which itself received financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and the Secretaría Xeral de Universidades (20%). [es_ES]
dc.description.sponsorship: Xunta de Galicia; ED431C 2020/24 [es_ES]
dc.description.sponsorship: Xunta de Galicia; ED481A 2021/147 [es_ES]
dc.description.sponsorship: Xunta de Galicia; ED481B-2022-025 [es_ES]
dc.description.sponsorship: Xunta de Galicia; ED431G 2019/01 [es_ES]
dc.language.iso: eng [es_ES]
dc.publisher: AME Publishing Company [es_ES]
dc.relation: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/RTI2018-095894-B-I00/ES/DESARROLLO DE TECNOLOGIAS INTELIGENTES PARA DIAGNOSTICO DE LA DMAE BASADAS EN EL ANALISIS AUTOMATICO DE NUEVAS MODALIDADES HETEROGENEAS DE ADQUISICION DE IMAGEN OFTALMOLOGICA [es_ES]
dc.relation: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2019-108435RB-I00/ES/CUANTIFICACIÓN Y CARACTERIZACIÓN COMPUTACIONAL DE IMAGEN MULTIMODAL OFTALMOLÓGICA: ESTUDIOS EN ESCLEROSIS MÚLTIPLE [es_ES]
dc.relation: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/TED2021-131201B-I00/ES/DIAGNÓSTICO DIGITAL: TRANSFORMACIÓN DE LA DETECCIÓN DE ENFERMEDADES NEUROVASCULARES Y DEL TRATAMIENTO DE LOS PACIENTES [es_ES]
dc.relation: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2024/PDC2022-133132-I00/ES/MEJORAS EN EL DIAGNÓSTICO E INVESTIGACIÓN CLÍNICO MEDIANTE TECNOLOGÍAS INTELIGENTES APLICADAS LA IMAGEN OFTALMOLÓGICA [es_ES]
dc.relation.uri: https://doi.org/10.21037/qims-23-4 [es_ES]
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Spain [es_ES]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/ [*]
dc.subject: Medical image registration [es_ES]
dc.subject: Deep learning [es_ES]
dc.subject: Feature-based registration (FBR) [es_ES]
dc.subject: Retinal imaging [es_ES]
dc.subject: Ophthalmology [es_ES]
dc.title: Joint keypoint detection and description network for color fundus image registration [es_ES]
dc.type: info:eu-repo/semantics/article [es_ES]
dc.rights.access: info:eu-repo/semantics/openAccess [es_ES]
UDC.journalTitle: Quantitative Imaging in Medicine and Surgery [es_ES]
UDC.volume: 13 [es_ES]
UDC.issue: 7 [es_ES]
UDC.startPage: 4540 [es_ES]
UDC.endPage: 4562 [es_ES]
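
The Methods part of the abstract describes a classic feature-based registration pipeline: detect keypoints, compute descriptors, match them across images, and fit a spatial transform with RANSAC. The sketch below illustrates that pipeline, assuming Python with OpenCV; ORB stands in for the paper's learned joint detector/descriptor, and the homography model and the `register_fundus_pair` helper are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def register_fundus_pair(fixed_path: str, moving_path: str) -> np.ndarray:
    """Warp the moving image onto the fixed image's coordinate frame."""
    fixed = cv2.imread(fixed_path, cv2.IMREAD_GRAYSCALE)
    moving = cv2.imread(moving_path, cv2.IMREAD_GRAYSCALE)

    # 1) Joint keypoint detection and description (ORB here; the paper
    #    trains an unsupervised network for this step).
    orb = cv2.ORB_create(nfeatures=2000)
    kp_f, des_f = orb.detectAndCompute(fixed, None)
    kp_m, des_m = orb.detectAndCompute(moving, None)

    # 2) Match descriptors; Hamming distance suits binary ORB descriptors,
    #    and cross-checking keeps only mutual nearest neighbours.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_m, des_f)

    # 3) Robust transform estimation with RANSAC; outlier matches are
    #    discarded via the inlier mask.
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4) Resample the moving image into the fixed image's frame.
    h, w = fixed.shape
    return cv2.warpPerspective(moving, H, (w, h))
```

RANSAC fits the transform to many random minimal subsets of matches and keeps the model with the most inliers, which is what makes this style of pipeline tolerant of unreliable keypoint correspondences.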


Files in this item


This item appears in the following collection(s)
