FedHEONN: Federated and homomorphically encrypted learning method for one-layer neural networks
Use this link to cite
http://hdl.handle.net/2183/34296
Unless otherwise indicated, the item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND)
Collections
- GI-LIDIA - Articles [60]
Metadata
Title
FedHEONN: Federated and homomorphically encrypted learning method for one-layer neural networks
Author(s)
Date
2023
Bibliographic citation
Ó. Fontenla-Romero, B. Guijarro-Berdiñas, E. Hernández-Pereira, and B. Pérez-Sánchez, "FedHEONN: Federated and homomorphically encrypted learning method for one-layer neural networks", Future Generation Computer Systems, vol. 149, 2023, pp. 200-211, doi: 10.1016/j.future.2023.07.018
Abstract
Federated learning (FL) is a distributed approach to developing collaborative learning models from decentralized data. This is relevant to many real applications, such as in the field of the Internet of Things, since the models can be used in edge computing devices. FL approaches are motivated by and designed to protect privacy, a highly relevant issue given current data protection regulations. Although FL methods are privacy-preserving by design, recently published papers show that privacy leaks do occur, caused by attacks designed to extract private data from information interchanged during learning. In this work, we present an FL method based on a neural network without hidden layers that incorporates homomorphic encryption (HE) to enhance robustness against the above-mentioned attacks. Unlike traditional FL methods that require multiple rounds of training for convergence, our method obtains the collaborative global model in a single training round, yielding an effective and efficient model that simplifies management of the FL training process. In addition, since our method includes HE, it is also robust against model inversion attacks. In experiments with big data sets and a large number of clients in a federated scenario, we demonstrate that use of HE does not affect the accuracy of the model, whose results are competitive with state-of-the-art machine learning models. We also show that behavior in terms of accuracy is the same for identically and non-identically distributed data scenarios.
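The single-round scheme described in the abstract can be illustrated with a toy sketch: each client computes the sufficient statistics of a closed-form linear (one-layer) fit on its local data, encrypts them, and the server aggregates the ciphertexts additively so it never sees the plaintext statistics. The sketch below uses textbook Paillier encryption as a simple additively homomorphic stand-in (the paper's actual scheme may differ, e.g. CKKS); the primes, regularization value, client data, and all helper names are purely illustrative, not the authors' implementation.

```python
# Toy one-round federated ridge regression (scalar feature) with an
# additively homomorphic aggregation step. Paillier here is a small
# insecure demo; a real deployment would use a scheme such as CKKS.
import math
import random

# --- minimal textbook Paillier (additively homomorphic) --------------
def paillier_keygen():
    p, q = 1000003, 1000033            # small demo primes (insecure)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

def he_add(pk, c1, c2):
    n, _ = pk
    return c1 * c2 % (n * n)           # ciphertext product = plaintext sum

# --- one training round over encrypted sufficient statistics ---------
SCALE = 1000                           # fixed-point scaling to integers
clients = [[(1.0, 2.1), (2.0, 3.9)],   # each client holds (x, y) pairs
           [(3.0, 6.2), (4.0, 7.8)]]

pk, sk = paillier_keygen()
enc_m, enc_u = encrypt(pk, 0), encrypt(pk, 0)
for data in clients:
    # Local statistics: m_k = sum(x^2), u_k = sum(x*y), sent encrypted.
    m_k = round(sum(x * x for x, _ in data) * SCALE)
    u_k = round(sum(x * y for x, y in data) * SCALE)
    enc_m = he_add(pk, enc_m, encrypt(pk, m_k))
    enc_u = he_add(pk, enc_u, encrypt(pk, u_k))

# The aggregator only combines ciphertexts; the key holder decrypts the
# totals and solves the closed-form system once -- no iterative rounds.
lam_reg = 0.01
M = decrypt(sk, enc_m) / SCALE         # total sum(x^2) across clients
U = decrypt(sk, enc_u) / SCALE         # total sum(x*y) across clients
w = U / (M + lam_reg)                  # global one-layer model weight
```

Because the model has a closed-form solution, aggregating these statistics once yields the same weight a centralized fit would produce, which is what makes a single communication round sufficient.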
Keywords
Edge computing
Federated learning
Homomorphic encryption
Neural networks
Privacy-preserving
Description
Funded for open-access publication by Universidade da Coruña/CISUG
Publisher's version
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND)