Efficient Implementation of Multilayer Perceptrons: Reducing Execution Time and Memory Consumption
Use this link to cite
http://hdl.handle.net/2183/39122
Collections
- Investigación (FIC) [1615]
Metadata
Title
Efficient Implementation of Multilayer Perceptrons: Reducing Execution Time and Memory Consumption
Author(s)
Cedron, F.; Alvarez-Gonzalez, S.; Ribas-Rodriguez, A.; Rodriguez-Yañez, S.; Porto-Pazos, A.B.
Date
2024
Bibliographic citation
Cedron, F.; Alvarez-Gonzalez, S.; Ribas-Rodriguez, A.; Rodriguez-Yañez, S.; Porto-Pazos, A.B. Efficient Implementation of Multilayer Perceptrons: Reducing Execution Time and Memory Consumption. Appl. Sci. 2024, 14, 8020. https://doi.org/10.3390/app14178020
Abstract
[Abstract]: A technique is presented that reduces the memory required by neural networks by improving weight storage. In contrast to traditional methods, whose memory overhead grows exponentially with network size, the proposed method stores only the existing connections between neurons. The proposed method is evaluated on feedforward networks and demonstrates memory savings of up to almost 80% while also being more efficient, especially with larger architectures.
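The abstract does not spell out the storage scheme, but the general idea of keeping only the existing connections rather than a full weight matrix can be illustrated with a minimal sketch. The Python below is an assumption-laden illustration, not the authors' implementation: the class name CompressedLayer, the CSR-like layout, the ReLU activation, and the 20% weight density in the usage example are all hypothetical choices made for the sake of the example.

```python
import numpy as np

class CompressedLayer:
    """Illustrative sketch only: a feedforward layer that stores just the
    nonzero connections in CSR-like arrays instead of a dense weight matrix."""

    def __init__(self, dense_weights, bias):
        # Keep only nonzero weights and remember which input neuron each one comes from.
        mask = dense_weights != 0.0
        self.n_out, self.n_in = dense_weights.shape
        self.values = dense_weights[mask].astype(np.float32)      # nonzero weights, row-major order
        self.col_idx = np.nonzero(mask)[1].astype(np.int32)       # source neuron of each stored weight
        counts = mask.sum(axis=1)                                 # connections per output neuron
        self.row_ptr = np.concatenate(([0], np.cumsum(counts))).astype(np.int32)
        self.bias = bias.astype(np.float32)

    def forward(self, x):
        # y_j = bias_j + sum over the stored connections of output neuron j.
        y = self.bias.copy()
        for j in range(self.n_out):
            start, end = self.row_ptr[j], self.row_ptr[j + 1]
            y[j] += np.dot(self.values[start:end], x[self.col_idx[start:end]])
        return np.maximum(y, 0.0)  # placeholder activation (ReLU)

    def nbytes(self):
        # Memory footprint of the compressed representation.
        return self.values.nbytes + self.col_idx.nbytes + self.row_ptr.nbytes + self.bias.nbytes

# Usage: a 1000x1000 layer pruned to roughly 20% weight density (hypothetical figures).
rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 1000)).astype(np.float32)
W[rng.random(W.shape) > 0.2] = 0.0
layer = CompressedLayer(W, np.zeros(1000, dtype=np.float32))
print("dense bytes:     ", W.nbytes)
print("compressed bytes:", layer.nbytes())
```

In a sketch like this, the savings depend directly on the weight density: the lower the fraction of existing connections, the smaller the stored value and index arrays relative to the dense matrix.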
Keywords
neural networks
multilayer perceptron
compressed weight matrix
weight density
sparsity
Description
Data is contained within the article.
Publisher's version
Rights
Attribution 3.0 Spain (CC BY 3.0 ES)