Image-to-image translation with Generative Adversarial Networks via retinal masks for realistic Optical Coherence Tomography imaging of Diabetic Macular Edema disorders

Use this link to cite
http://hdl.handle.net/2183/31834
Unless otherwise noted, the item's license is described as Attribution-NonCommercial-NoDerivs 3.0 Spain
Collections
- Investigación (FIC) [1685]
Metadata
Title
Image-to-image translation with Generative Adversarial Networks via retinal masks for realistic Optical Coherence Tomography imaging of Diabetic Macular Edema disorders
Date
2023
Bibliographic citation
P. L. Vidal, J. de Moura, J. Novo, M. G. Penedo, and M. Ortega, "Image-to-image translation with Generative Adversarial Networks via retinal masks for realistic Optical Coherence Tomography imaging of Diabetic Macular Edema disorders", Biomedical Signal Processing and Control, vol. 79, p. 1, Jan. 2023, doi: 10.1016/j.bspc.2022.104098.
Abstract
[Abstract]: One of the main issues with deep learning is the need for a significant number of samples. We address this problem in the field of Optical Coherence Tomography (OCT), specifically in the context of Diabetic Macular Edema (DME). This pathology is one of the main causes of blindness in developed countries and, due to the difficulties of image acquisition and the saturation of health services, creating computer-aided diagnosis (CAD) systems for it is an arduous task. For this reason, we propose a solution to generate samples. Our strategy employs image-to-image Generative Adversarial Networks (GANs) to translate a binary mask into a realistic OCT image. Moreover, thanks to the clinical relationship between retinal shape and the presence of DME fluid, we can generate both pathological and non-pathological samples by altering the morphology of the binary mask. To demonstrate the capabilities of our proposal, we test it with two state-of-the-art classification strategies. In the first, we evaluate a system trained exclusively on generated images, which reaches 94.83% of the state-of-the-art accuracy. In the second, we test it against a state-of-the-art expert model based on deep features, where it also achieves successful results, with 98.23% of the accuracy of the original work. Our methodology thus proves useful in scenarios where data is scarce, and it could easily be adapted to other imaging modalities and pathologies where key shape constraints in the image provide enough information to recreate realistic samples.
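The abstract gives no implementation details, but the mask-to-OCT translation it describes matches the general pix2pix recipe: a generator maps a binary retinal mask to an OCT-like B-scan, and a discriminator judges (mask, image) pairs. Below is a minimal sketch of that idea in PyTorch; the architecture, layer sizes, and L1 loss weight are illustrative assumptions, not the authors' published configuration.

```python
# Minimal pix2pix-style sketch of mask-to-OCT translation (assumed setup,
# not the paper's actual architecture).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Translates a 1-channel binary retinal mask into a 1-channel OCT-like image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, mask):
        return self.net(mask)

class Discriminator(nn.Module):
    """PatchGAN-style critic on the concatenated (mask, image) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )
    def forward(self, mask, image):
        return self.net(torch.cat([mask, image], dim=1))

def train_step(G, D, opt_g, opt_d, mask, real_oct, l1_weight=100.0):
    bce = nn.BCEWithLogitsLoss()
    # Discriminator: real (mask, OCT) pairs vs. generated pairs.
    fake_oct = G(mask)
    d_real = D(mask, real_oct)
    d_fake = D(mask, fake_oct.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool D, plus L1 fidelity to the paired real image.
    d_fake = D(mask, fake_oct)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) \
        + l1_weight * nn.functional.l1_loss(fake_oct, real_oct)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    mask = (torch.rand(4, 1, 64, 64) > 0.5).float()  # stand-in binary masks
    real = torch.rand(4, 1, 64, 64) * 2 - 1          # stand-in B-scans in [-1, 1]
    print(train_step(G, D, opt_g, opt_d, mask, real))
```

Under this scheme, the pathological/non-pathological control described in the abstract would come from editing the mask itself before feeding it to the generator, for example by deforming the retinal layer boundaries or inserting fluid-shaped regions.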
Keywords
Optical coherence tomography
Generative adversarial network
Image-to-image translation
Diabetic macular edema
Synthetic data
Publisher's version
Rights
Attribution-NonCommercial-NoDerivs 3.0 Spain
ISSN
1746-8094