ConKeD: multiview contrastive descriptor learning for keypoint-based retinal image registration

Use this link to cite: http://hdl.handle.net/2183/37987
Bibliographic citation
Rivas-Villar, D., Hervella, Á.S., Rouco, J. et al. ConKeD: multiview contrastive descriptor learning for keypoint-based retinal image registration. Med Biol Eng Comput (2024). https://doi.org/10.1007/s11517-024-03160-6
Abstract
Retinal image registration is of utmost importance due to its wide applications in medical practice. In this context, we propose ConKeD, a novel deep learning approach to learn descriptors for retinal image registration. In contrast to current registration methods, our approach employs a novel multi-positive multi-negative contrastive learning strategy that enables the utilization of additional information from the available training samples. This makes it possible to learn high-quality descriptors from limited training data. To train and evaluate ConKeD, we combine these descriptors with domain-specific keypoints, particularly blood vessel bifurcations and crossovers, that are detected using a deep neural network. Our experimental results demonstrate the benefits of the novel multi-positive multi-negative strategy, as it outperforms the widely used triplet loss technique (single-positive and single-negative) as well as the single-positive multi-negative alternative. Additionally, the combination of ConKeD with the domain-specific keypoints produces comparable results to the state-of-the-art methods for retinal image registration, while offering important advantages such as avoiding pre-processing, utilizing fewer training samples, and requiring fewer detected keypoints, among others. Therefore, ConKeD shows a promising potential towards facilitating the development and application of deep learning-based methods for retinal image registration.
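The multi-positive multi-negative strategy described in the abstract can be sketched, in generic form, as a supervised-contrastive-style loss in which each anchor descriptor is pulled toward several positives and pushed away from several negatives at once. This is an illustrative sketch only, not the paper's exact formulation: the function name, temperature parameter `tau`, and the assumption of L2-normalized descriptors are ours.

```python
import numpy as np

def multi_pos_multi_neg_loss(anchor, positives, negatives, tau=0.1):
    """Generic multi-positive multi-negative contrastive loss
    (SupCon-style sketch; NOT necessarily the exact loss used by ConKeD).

    anchor:    (d,)   L2-normalized descriptor of a keypoint
    positives: (P, d) L2-normalized descriptors of the same keypoint
                      seen in other views of the retina
    negatives: (N, d) L2-normalized descriptors of other keypoints
    """
    pos_sims = positives @ anchor / tau          # (P,) cosine sims / tau
    neg_sims = negatives @ anchor / tau          # (N,)
    # log of the softmax denominator over all positives and negatives
    log_denom = np.log(np.exp(np.concatenate([pos_sims, neg_sims])).sum())
    # average negative log-probability over every positive, so each
    # additional positive view contributes its own training signal
    return float(-(pos_sims - log_denom).mean())
```

With a single positive and a single negative this collapses to a standard single-positive InfoNCE term; the multi-positive form is what lets a method like ConKeD exploit several corresponding views of each keypoint simultaneously, which is the advantage the abstract reports over the triplet loss and the single-positive multi-negative alternative.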
Description
Funded for open access publication: Universidade da Coruña/CISUG
Rights
Attribution 3.0 Spain