High/Low Quality Style Transfer for Mutual Conversion of OCT Images Using Contrastive Unpaired Translation Generative Adversarial Networks
Use this link to cite
http://hdl.handle.net/2183/36424
Date
2022-05-15
Bibliographic citation
Gende, M., de Moura, J., Novo, J., Ortega, M. (2022). High/Low Quality Style Transfer for Mutual Conversion of OCT Images Using Contrastive Unpaired Translation Generative Adversarial Networks. In: Sclaroff, S., Distante, C., Leo, M., Farinella, G.M., Tombari, F. (eds) Image Analysis and Processing – ICIAP 2022. ICIAP 2022. Lecture Notes in Computer Science, vol 13231. Springer, Cham. https://doi.org/10.1007/978-3-031-06427-2_18
Abstract
[Abstract]: Recent advances in artificial intelligence and deep learning models are contributing to the development of advanced computer-aided diagnosis (CAD) systems. In the context of medical imaging, Optical Coherence Tomography (OCT) is a valuable technique that can provide cross-sectional visualisations of the ocular tissue. However, OCT is constrained by a trade-off between the quality of the visualisations it can produce and the overall amount of tissue that can be analysed at once. This trade-off leads to a scarcity of high quality data, a problem that is very prevalent when developing machine learning-based CAD systems for medical imaging. To mitigate this problem, we present a novel methodology for the unpaired conversion of OCT images acquired with a low quality extensive scanning preset into the visual style of those taken with a high quality intensive scan, and vice versa. This is achieved by employing contrastive unpaired translation generative adversarial networks to convert between the visual styles of the different acquisition presets. The results obtained in the validation experiments show that these synthetically generated images can mirror the visual features of the original ones while preserving the natural tissue texture, effectively increasing the total number of available samples that can be used to train robust machine learning-based CAD systems.
Keywords
Computer-aided Diagnosis
Optical Coherence Tomography
Epiretinal Membrane
Segmentation
Deep Learning
Description
21st International Conference, Lecce, Italy, May 23–27, 2022
Publisher's version
Rights
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
ISSN
1611-3349
ISBN
978-3-031-06427-2, 978-3-031-06426-5