Show simple item record
Joint keypoint detection and description network for color fundus image registration
dc.contributor.author | Rivas-Villar, David | |
dc.contributor.author | Hervella, Álvaro S. | |
dc.contributor.author | Rouco, J. | |
dc.contributor.author | Novo Buján, Jorge | |
dc.date.accessioned | 2024-06-27T12:28:00Z | |
dc.date.available | 2024-06-27T12:28:00Z | |
dc.date.issued | 2023-07-01 | |
dc.identifier.citation | D. Rivas-Villar, Á. S. Hervella, J. Rouco, y J. Novo, «Joint keypoint detection and description network for color fundus image registration», Quant Imaging Med Surg, vol. 13, n.o 7, pp. 4540-4562, jul. 2023, doi: 10.21037/qims-23-4. | es_ES |
dc.identifier.issn | 2223-4292 | |
dc.identifier.issn | 2223-4306 | |
dc.identifier.uri | http://hdl.handle.net/2183/37498 | |
dc.description.abstract | [Abstract]: Background: Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. In these cases, image registration, which is the process of aligning images taken from different viewpoints or moments in time, is fundamental to compare different images and to assess changes in their appearance, commonly caused by disease progression. Currently, the field of color fundus registration is dominated by classical methods, as deep learning alternatives have not shown sufficient improvement over classic methods to justify the added computational cost. However, deep learning registration methods are still considered beneficial, as they can be easily adapted to different modalities and devices following a data-driven learning approach. Methods: In this work, we propose a novel methodology to register color fundus images using deep learning for the joint detection and description of keypoints. In particular, we use an unsupervised neural network trained to obtain repeatable keypoints and reliable descriptors. These keypoints and descriptors allow us to produce an accurate registration using RANdom SAmple Consensus (RANSAC). We train the method using the Messidor dataset and test it with the Fundus Image Registration Dataset (FIRE), both of which are publicly accessible. Results: Our work demonstrates a color fundus registration method that is robust to changes in imaging devices and capture conditions. Moreover, we conduct multiple experiments exploring several of the method's parameters to assess their impact on the registration performance. The method obtained an overall Registration Score of 0.695 for the whole FIRE dataset (0.925 for category S, 0.352 for P, and 0.726 for A).
Conclusions: Our proposal improves the results of previous deep learning methods in every category and surpasses the performance of classical approaches in category A, which includes cases with disease progression and thus represents the most relevant scenario for clinical practice, since registration is commonly used to monitor disease progression in patients. | es_ES |
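The abstract describes estimating a registration transform from matched keypoints with RANSAC. As a minimal, hypothetical sketch of that final step (not the paper's actual pipeline, which uses learned keypoints and descriptors), the following numpy-only code fits an affine transform to point correspondences while rejecting outlier matches via RANSAC:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine fit: dst ~ src @ A + t, returned as a 3x2 matrix."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])           # homogeneous coords, n x 3
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params                                    # rows 0-1: A, row 2: t

def apply_affine(params, pts):
    """Apply the fitted 3x2 affine transform to an array of 2D points."""
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ params

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    """RANSAC: repeatedly fit on minimal 3-point samples, keep the model
    with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        params = estimate_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(params, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return estimate_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

In practice, fundus registration pipelines often fit a homography or higher-order transform rather than an affine model; the sampling-and-consensus loop is the same, only the per-sample model fit changes.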
dc.description.sponsorship | This work was supported by Ministerio de Ciencia e Innovación, Government of Spain, through the RTI2018-095894-B-I00, PID2019-108435RB-I00, TED2021-131201B-I00, and PDC2022-133132-I00 research projects; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the Grupos de Referencia Competitiva grant (Ref. ED431C 2020/24), the predoctoral fellowship (Ref. ED481A 2021/147), and the postdoctoral fellowship (Ref. ED481B-2022-025); and CITIC, Centro de Investigación de Galicia (Ref. ED431G 2019/01), which itself received financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%). | es_ES |
dc.description.sponsorship | Xunta de Galicia; ED431C 2020/24 | es_ES |
dc.description.sponsorship | Xunta de Galicia; ED481A 2021/147 | es_ES |
dc.description.sponsorship | Xunta de Galicia; ED481B-2022-025 | es_ES |
dc.description.sponsorship | Xunta de Galicia; ED431G 2019/01 | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | AME Publishing Company | es_ES |
dc.relation | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/RTI2018-095894-B-I00/ES/DESARROLLO DE TECNOLOGIAS INTELIGENTES PARA DIAGNOSTICO DE LA DMAE BASADAS EN EL ANALISIS AUTOMATICO DE NUEVAS MODALIDADES HETEROGENEAS DE ADQUISICION DE IMAGEN OFTALMOLOGICA | es_ES |
dc.relation | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2019-108435RB-I00/ES/CUANTIFICACIÓN Y CARACTERIZACIÓN COMPUTACIONAL DE IMAGEN MULTIMODAL OFTALMOLÓGICA: ESTUDIOS EN ESCLEROSIS MÚLTIPLE | es_ES |
dc.relation | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/TED2021-131201B-I00/ES/DIAGNÓSTICO DIGITAL: TRANSFORMACIÓN DE LA DETECCIÓN DE ENFERMEDADES NEUROVASCULARES Y DEL TRATAMIENTO DE LOS PACIENTES | es_ES |
dc.relation | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2024/PDC2022-133132-I00/ES/MEJORAS EN EL DIAGNÓSTICO E INVESTIGACIÓN CLÍNICO MEDIANTE TECNOLOGÍAS INTELIGENTES APLICADAS LA IMAGEN OFTALMOLÓGICA | es_ES |
dc.relation.uri | https://doi.org/10.21037/qims-23-4 | es_ES |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 Spain | es_ES |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/es/ | * |
dc.subject | Medical image registration | es_ES |
dc.subject | Deep learning | es_ES |
dc.subject | Feature-based registration (FBR) | es_ES |
dc.subject | Retinal imaging | es_ES |
dc.subject | Ophthalmology | es_ES |
dc.title | Joint keypoint detection and description network for color fundus image registration | es_ES |
dc.type | info:eu-repo/semantics/article | es_ES |
dc.rights.access | info:eu-repo/semantics/openAccess | es_ES |
UDC.journalTitle | Quantitative Imaging in Medicine and Surgery | es_ES |
UDC.volume | 13 | es_ES |
UDC.issue | 7 | es_ES |
UDC.startPage | 4540 | es_ES |
UDC.endPage | 4562 | es_ES |
Files in this item
This item appears in the following collection(s)
-
GI-VARPA - Artigos [75]