Decentralized Data-Privacy Preserving Deep-Learning Approaches for Enhancing Inter-Database Generalization in Automatic Sleep Staging
Use this link to cite
http://hdl.handle.net/2183/37737
Unless otherwise indicated, the item's license is described as Attribution 4.0 International
Collections
- II - Artigos [544]
Metadata
Show full item record
Title
Decentralized Data-Privacy Preserving Deep-Learning Approaches for Enhancing Inter-Database Generalization in Automatic Sleep Staging
Date
2023
Bibliographic citation
A. Anido-Alonso and D. Alvarez-Estevez, "Decentralized Data-Privacy Preserving Deep-Learning Approaches for Enhancing Inter-Database Generalization in Automatic Sleep Staging," in IEEE Journal of Biomedical and Health Informatics, vol. 27, no. 11, pp. 5610-5621, Nov. 2023, doi: 10.1109/JBHI.2023.3310869.
Abstract
[Abstract]: Automatic sleep staging has been an active field of development. Despite multiple efforts, the area remains a focus of research interest. Indeed, while promising results have been reported in past literature, uptake of automatic sleep scoring in the clinical setting remains low. One of the current issues concerns the difficulty of generalizing performance results beyond the local testing scenario, i.e. across data from different clinics. Issues derived from data-privacy restrictions, which generally apply in the medical domain, pose additional difficulties in the successful development of these methods. We propose the use of several decentralized deep-learning approaches, namely ensemble models and federated learning, for robust inter-database performance generalization and data-privacy preservation in the automatic sleep staging scenario. Specifically, we explore four ensemble combination strategies (max-voting, output averaging, size-proportional weighting, and Nelder-Mead) and present a new federated learning algorithm, termed sub-sampled federated stochastic gradient descent (ssFedSGD). To evaluate the generalization capabilities of these approaches, experimental procedures are carried out using a leaving-one-database-out direct-transfer scenario on six independent and heterogeneous public sleep staging databases. The resulting performance is compared against two baseline approaches involving single-database and centralized multiple-database derived models. Our results show that the proposed decentralized learning methods outperform baseline local approaches and provide generalization results similar to those of centralized database-combined approaches. We conclude that these methods are preferable choices, as they come with additional advantages concerning improved scalability, flexible design, and data-privacy preservation.
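The abstract's central idea can be illustrated with a minimal sketch of sub-sampled federated SGD. The details of the paper's ssFedSGD algorithm are not given here, so the following is a generic, hypothetical example: six "sites" (standing in for the six databases) each hold private data for a simple linear regression; in every round the server samples a subset of sites, each computes a gradient locally, and only the averaged gradient update is shared, so raw data never leaves a site. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "per-clinic" datasets: six sites, each with private (X, y).
def make_site(n=64, d=5):
    X = rng.normal(size=(n, d))
    w_true = np.arange(1, d + 1, dtype=float)  # ground-truth weights
    y = X @ w_true + rng.normal(scale=0.1, size=n)
    return X, y

sites = [make_site() for _ in range(6)]

def local_gradient(w, X, y):
    # Gradient of the mean-squared error for a linear model,
    # computed entirely on the site's own data.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sub_sampled_federated_sgd(sites, rounds=200, lr=0.05, sample_k=3):
    # Each round: sample `sample_k` of the sites, collect their local
    # gradients, and apply the averaged update on the server side.
    # Only gradients travel over the network, never raw recordings.
    d = sites[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(rounds):
        chosen = rng.choice(len(sites), size=sample_k, replace=False)
        grads = [local_gradient(w, *sites[i]) for i in chosen]
        w -= lr * np.mean(grads, axis=0)
    return w

w = sub_sampled_federated_sgd(sites)
print(np.round(w, 1))  # should land close to the true weights 1..5
```

The per-round sub-sampling is what makes the scheme scale: the server never needs all sites online at once, which matches the scalability and flexible-design advantages the abstract claims for the decentralized approaches.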
Keywords
Data-privacy
Deep-learning
Domain adaptation
Ensemble models
Federated learning
Inter-database generalization
Sleep staging
Publisher's version
Rights
Attribution 4.0 International
ISSN
2168-2194 (print)
2168-2208 (electronic)