Show simple item record

dc.contributor.author    Iglesias Morís, Daniel
dc.contributor.author    Hervella, Álvaro S.
dc.contributor.author    Rouco, J.
dc.contributor.author    Novo Buján, Jorge
dc.contributor.author    Ortega Hortas, Marcos
dc.date.accessioned    2024-05-16T11:30:10Z
dc.date.available    2024-05-16T11:30:10Z
dc.date.issued    2021
dc.identifier.citation    D. I. Morís, Á. S. Hervella, J. Rouco, J. Novo and M. Ortega, "Context encoder self-supervised approaches for eye fundus analysis," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9533567
dc.identifier.uri    http://hdl.handle.net/2183/36500
dc.description    © 2021 IEEE. This version of the paper has been accepted for publication. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The final published paper is available online at: https://doi.org/10.1109/IJCNN52387.2021.9533567
dc.description    Proceedings of the International Joint Conference on Neural Networks, Volume 2021-July, 18 July 2021. 2021 International Joint Conference on Neural Networks, IJCNN 2021, Virtual, Shenzhen, 18 July 2021 through 22 July 2021, Code 171891
dc.description.abstract    [Abstract]: The broad availability of medical images in current clinical practice provides a source of large image datasets. In order to use these datasets for training deep neural networks in detection and segmentation tools, it is necessary to provide pixel-wise annotations associated with each image. However, image annotation is a tedious, time-consuming and error-prone process that requires the participation of experienced specialists. In this work, we propose different complementary context encoder self-supervised approaches to learn relevant characteristics for the restricted medical imaging domain of retinographies. In particular, we propose a patch-wise approach, inspired by the previous proposal of broad domain context encoders, and complementary fully convolutional approaches. These approaches take advantage of the restricted application domain to learn the relevant features of the eye fundus, a situation that can be extrapolated to many medical imaging issues. Different representative experiments were conducted in order to evaluate the performance of the trained models, demonstrating the suitability of the proposed approaches for understanding the eye fundus characteristics. The proposed self-supervised models can serve as a reference to support other domain-related issues through transfer or multi-task learning paradigms, such as the detection and evaluation of retinal structures or anomaly detection in the context of pathological analysis.
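A rough, illustrative sketch of the patch-wise context-encoder self-supervision described in the abstract is given below: a central patch of each retinography is masked and an encoder-decoder network is trained to reconstruct the hidden content from the surrounding context. The architecture, masking scheme and hyperparameters are placeholders chosen for brevity, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): patch-wise context-encoder
# self-supervision on retinographies, where a central patch is hidden and the
# network learns to reconstruct it from the surrounding context.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Small encoder-decoder used as a placeholder architecture."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def mask_center(images, patch=64):
    """Zero out a central square patch; return the masked batch and the mask."""
    masked = images.clone()
    _, _, h, w = images.shape
    top, left = (h - patch) // 2, (w - patch) // 2
    mask = torch.zeros_like(images)
    mask[:, :, top:top + patch, left:left + patch] = 1.0
    masked[mask.bool()] = 0.0
    return masked, mask

model = ContextEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

def train_step(images):
    """One self-supervised step; `images` is a (B, 3, H, W) batch of retinographies."""
    masked, mask = mask_center(images)
    recon = model(masked)
    # Reconstruction loss restricted to the hidden patch, as in context encoders.
    loss = criterion(recon * mask, images * mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A fully convolutional variant would instead hide patches or regions anywhere in the image and compute the loss over the full output, while the training loop stays essentially the same.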
dc.description.sponsorship    This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project, the predoctoral grant contract ref. ED481A-2017/328; Ministerio de Ciencia e Innovación, Government of Spain, through the research project with reference PID2019-108435RB-I00; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; Axencia Galega de Innovación (GAIN), Xunta de Galicia, grant ref. IN845D 2020/38; CITIC, Centro de Investigación de Galicia ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
dc.description.sponsorship    Xunta de Galicia; ED481A-2017/328
dc.description.sponsorship    Xunta de Galicia; ED431C 2020/24
dc.description.sponsorship    Xunta de Galicia; IN845D 2020/38
dc.description.sponsorship    Xunta de Galicia; ED431G 2019/01
dc.language.iso    eng
dc.publisher    Institute of Electrical and Electronics Engineers Inc.
dc.relation    info:eu-repo/grantAgreement/MICINN/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/DTS18%2F00136/ES/Plataforma online para prevención y detección precoz de enfermedad vascular mediante análisis automatizado de información e imagen clínica
dc.relation    info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/RTI2018-095894-B-I00/ES/DESARROLLO DE TECNOLOGIAS INTELIGENTES PARA DIAGNOSTICO DE LA DMAE BASADAS EN EL ANALISIS AUTOMATICO DE NUEVAS MODALIDADES HETEROGENEAS DE ADQUISICION DE IMAGEN OFTALMOLOGICA
dc.relation    info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2019-108435RB-I00/ES/CUANTIFICACIÓN Y CARACTERIZACIÓN COMPUTACIONAL DE IMAGEN MULTIMODAL OFTALMOLÓGICA: ESTUDIOS EN ESCLEROSIS MÚLTIPLE
dc.relation.uri    https://doi.org/10.1109/IJCNN52387.2021.9533567
dc.rights    © 2021 IEEE.
dc.subject    component
dc.subject    formatting
dc.subject    insert
dc.subject    style
dc.subject    styling
dc.title    Context encoder self-supervised approaches for eye fundus analysis
dc.type    info:eu-repo/semantics/conferenceObject
dc.rights.access    info:eu-repo/semantics/openAccess
dc.identifier.doi    10.1109/IJCNN52387.2021.9533567
UDC.conferenceTitle    IJCNN 2021


Files in this item


This item appears in the following collection(s)
