Reduced precision discretization based on information theory
Use this link to cite
http://hdl.handle.net/2183/32305
Unless otherwise indicated, the item's license is described as Attribution-NonCommercial-NoDerivatives 3.0 Spain
Collections
- GI-LIDIA - Articles [65]
Metadata
Title
Reduced precision discretization based on information theory
Date
2022-01
Bibliographic citation
B. Ares, L. Morán-Fernández, and V. Bolón-Canedo, "Reduced precision discretization based on information theory", Procedia Computer Science, vol. 207, pp. 887-896, Jan. 2022, doi: 10.1016/j.procs.2022.09.144.
Abstract
In recent years, new technological areas have emerged and proliferated, such as the Internet of Things or embedded systems in drones, which are usually characterized by the use of devices with strict requirements on weight, size, cost and power consumption. As a consequence, there has been growing interest in implementing machine learning algorithms with reduced precision that can be embedded in these constrained devices. Such algorithms cover not only learning, but can also be applied to other stages such as feature selection or data discretization. In this work we study the behavior of the Minimum Description Length Principle (MDLP) discretizer, proposed by Fayyad and Irani, when reduced precision is used, and how much it affects a typical machine learning pipeline. Experimental results show that the use of fixed-point format is sufficient to achieve performance similar to that obtained with double-precision format, which opens the door to the use of reduced-precision discretizers in embedded systems, minimizing energy consumption and carbon emissions.
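The Fayyad-Irani MDLP discretizer referenced in the abstract recursively chooses cut points that maximize information gain (mutual information between the binarized feature and the class). The following is a minimal sketch, not the paper's implementation, of one such split applied to values quantized with a hypothetical fixed-point format (8 fractional bits), illustrating how a reduced-precision input can still yield essentially the same cut point as double precision:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def to_fixed_point(x, frac_bits=8):
    """Quantize a float to a fixed-point grid with `frac_bits` fractional bits
    (an illustrative choice; the paper evaluates its own formats)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def best_cut(values, labels):
    """One step of MDLP-style discretization: exhaustively pick the cut
    point with maximum information gain over the class labels."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    base = entropy([y for _, y in pairs])
    best_gain, best_t = -1.0, None
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # only boundaries between distinct values are candidates
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        gain = (base
                - (len(left) / n) * entropy(left)
                - (len(right) / n) * entropy(right))
        if gain > best_gain:
            best_gain = gain
            best_t = (pairs[i - 1][0] + pairs[i][0]) / 2  # midpoint cut
    return best_t, best_gain

# Toy data: the class flips around x = 1.0.
xs = [0.13, 0.27, 0.55, 0.81, 1.20, 1.47, 1.63, 1.92]
ys = [0, 0, 0, 0, 1, 1, 1, 1]

t_double, g_double = best_cut(xs, ys)
t_fixed, g_fixed = best_cut([to_fixed_point(x) for x in xs], ys)
```

On this toy data both the double-precision and quantized runs find a perfect split (gain 1 bit) with nearly identical cut points, the intuition behind the paper's finding that fixed-point precision suffices for discretization.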
Keywords
Reduced precision
Discretization
Preprocessing
Mutual information
Machine learning
Publisher's version
Rights
Attribution-NonCommercial-NoDerivatives 3.0 Spain
ISSN
1877-0509