E2E-FS: An End-to-End Feature Selection Method for Neural Networks
Not available until 2025-07-15
Use this link to cite
http://hdl.handle.net/2183/34392

Collections
- GI-LIDIA - Articles [65]
Metadata

Title
E2E-FS: An End-to-End Feature Selection Method for Neural Networks

Date
2023-07

Citation
B. Cancela, V. Bolón-Canedo, and A. Alonso-Betanzos, "E2E-FS: An End-to-End Feature Selection Method for Neural Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 7, pp. 8311-8323, Jul. 2023, doi: 10.1109/TPAMI.2022.3228824.
Abstract
[Abstract]: Classic embedded feature selection algorithms are often divided into two large groups: tree-based algorithms and LASSO variants. The two approaches focus on different goals: tree-based algorithms provide a clear explanation of which variables are used to trigger a certain output, whereas LASSO-like approaches sacrifice detailed explainability in favor of higher accuracy. In this paper, we present a novel embedded feature selection algorithm, called End-to-End Feature Selection (E2E-FS), that aims to provide both accuracy and explainability. Despite having non-convex regularization terms, our algorithm, like the LASSO approach, is solved with gradient descent techniques, introducing restrictions that force the model to select at most a fixed maximum number of features, which are subsequently used by the classifier. Although these are hard restrictions, the experimental results show that the algorithm can be combined with any learning model that is trained with a gradient descent algorithm.
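To illustrate the general idea the abstract describes, the following is a minimal sketch (not the authors' implementation) of embedded feature selection trained end-to-end by gradient descent: a soft feature mask is learned jointly with a logistic-regression classifier, with a penalty term that pushes the number of active features toward a budget `k`. All names, the penalty form, and the hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only the first 3 of 20 features are informative.
n, d, k = 500, 20, 3
X = rng.normal(size=(n, d))
y = (X[:, :3].sum(axis=1) > 0).astype(float)

w = np.zeros(d)   # classifier weights
s = np.zeros(d)   # feature scores; the soft mask is sigmoid(s)
lr, lam = 0.5, 1.0  # illustrative learning rate and penalty strength

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    m = sigmoid(s)            # soft feature mask in (0, 1)
    Xm = X * m                # masked inputs fed to the classifier
    p = sigmoid(Xm @ w)       # predicted probabilities
    err = p - y               # gradient of logistic loss w.r.t. logits
    gw = Xm.T @ err / n
    # Gradient w.r.t. the mask: data term plus a (non-convex overall)
    # penalty lam * (sum(m) - k)^2 that enforces the feature budget.
    gm = (X * w).T @ err / n + 2 * lam * (m.sum() - k)
    gs = gm * m * (1 - m)     # chain rule through the sigmoid
    w -= lr * gw
    s -= lr * gs

selected = np.argsort(sigmoid(s))[-k:]
print(sorted(selected))  # likely the informative features 0, 1, 2
```

The key point mirrored from the paper's description is that selection happens inside the same gradient-descent loop as classifier training, so the scheme plugs into any model trained that way; the paper's actual constraints are harder than this soft quadratic penalty.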
Keywords
Feature selection
End-to-end
Non-convex problem
Rights
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available online at: https://doi.org/10.1109/TPAMI.2022.3228824
ISSN
0162-8828