Joint keypoint detection and description network for color fundus image registration
Use this link to cite
http://hdl.handle.net/2183/37498
Unless otherwise indicated, the item's license is described as Atribución-NoComercial-SinDerivadas 3.0 España
Collections
- GI-VARPA - Articles [75]
Metadata
Title
Joint keypoint detection and description network for color fundus image registration
Date
2023-07-01
Bibliographic citation
D. Rivas-Villar, Á. S. Hervella, J. Rouco, and J. Novo, "Joint keypoint detection and description network for color fundus image registration," Quant Imaging Med Surg, vol. 13, no. 7, pp. 4540-4562, Jul. 2023, doi: 10.21037/qims-23-4.
Abstract
Background: Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. In these cases, image registration, which is the process of aligning images taken from different viewpoints or moments in time, is fundamental to compare different images and to assess changes in their appearance, commonly caused by disease progression. Currently, the field of color fundus registration is dominated by classical methods, as deep learning alternatives have not shown sufficient improvement over classical methods to justify the added computational cost. However, deep learning registration methods are still considered beneficial, as they can be easily adapted to different modalities and devices by following a data-driven learning approach.
Methods: In this work, we propose a novel methodology to register color fundus images using deep learning for the joint detection and description of keypoints. In particular, we use an unsupervised neural network trained to obtain repeatable keypoints and reliable descriptors. These keypoints and descriptors make it possible to produce an accurate registration using RANdom SAmple Consensus (RANSAC). We train the method using the Messidor dataset and test it with the Fundus Image Registration Dataset (FIRE), both of which are publicly accessible.
Results: Our work demonstrates a color fundus registration method that is robust to changes in imaging devices and capture conditions. Moreover, we conduct multiple experiments exploring several of the method’s parameters to assess their impact on the registration performance. The method obtained an overall Registration Score of 0.695 for the whole FIRE dataset (0.925 for category S, 0.352 for P, and 0.726 for A).
Conclusions: Our proposal improves on the results of previous deep learning methods in every category and surpasses the performance of classical approaches in category A. This category contains image pairs showing disease progression and thus represents the most relevant scenario for clinical practice, as registration is commonly used to monitor disease in affected patients.
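The record does not include code; as an illustration of the final stage of the feature-based pipeline described in the Methods (matched keypoints filtered with RANSAC to estimate a transform), the following is a minimal numpy-only sketch. It assumes matched keypoint coordinates have already been produced by the detection/description network, and uses an affine model for simplicity; the function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst (N >= 3)."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src   # rows for x': [x, y, 1, 0, 0, 0]
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src   # rows for y': [0, 0, 0, x, y, 1]
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)

def ransac_affine(src, dst, n_iters=500, inlier_tol=3.0, seed=0):
    """RANSAC: fit an affine model on random minimal samples of 3 matches,
    keep the model with the most inliers, then refit on all its inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=3, replace=False)
        M = estimate_affine(src[idx], dst[idx])
        # Apply the candidate transform and measure reprojection error.
        pred = src @ M[:, :2].T + M[:, 2]
        err = np.linalg.norm(pred - dst, axis=1)
        inliers = err < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final refit on the consensus set gives the registration transform.
    return estimate_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

The same consensus scheme extends to homographies (4-point minimal samples), which better model the curved retinal surface; the affine case is kept here only to keep the sketch short.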
Keywords
Medical image registration
Deep learning
Feature-based registration (FBR)
Retinal imaging
Ophthalmology
Publisher's version
Rights
Atribución-NoComercial-SinDerivadas 3.0 España
ISSN
2223-4292
2223-4306