Absolute convergence and error thresholds in non-active adaptive sampling
Use this link to cite
http://hdl.handle.net/2183/31198
Unless otherwise noted, the item's license is described as Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0)
Collections
- GI-LYS - Articles [51]
Metadata
Title
Absolute convergence and error thresholds in non-active adaptive sampling
Date
2022-05
Bibliographic citation
M. Vilares Ferro, V.M. Darriba Bilbao, J. Vilares, Absolute convergence and error thresholds in non-active adaptive sampling, J. Comput. Syst. Sci. 129 (2022) 39-61. http://dx.doi.org/10.1016/j.jcss.2022.05.002
Abstract
[Abstract] Non-active adaptive sampling is a way of building machine learning models from a training database, in which the sample size needed to guarantee model quality is derived dynamically and automatically. In this context, and regardless of the strategy used for both scheduling and generating weak predictors, we describe a proposal for calculating absolute convergence and error thresholds. The technique not only makes it possible to establish when the quality of the model no longer increases, but also supplies a proximity condition to estimate in absolute terms how close the model is to achieving that goal, thus supporting decision making when fine-tuning learning parameters during model selection. The technique proves correct and complete with respect to our working hypotheses, and also strengthens the robustness of the sampling scheme. Tests meet our expectations and illustrate the proposal in the domain of natural language processing, taking the generation of part-of-speech taggers as a case study.
Keywords
Machine learning convergence
Non-active adaptive sampling
PoS tagging
Publisher's version
Rights
Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0)
ISSN
0022-0000