Absolute convergence and error thresholds in non-active adaptive sampling
Use this link to cite
http://hdl.handle.net/2183/31198
Unless otherwise noted, the item's license is described as Creative Commons Attribution license (CC BY-NC-ND 4.0)
Collections
- GI-LYS - Artigos [51]
Metadata
Title
Absolute convergence and error thresholds in non-active adaptive sampling
Date
2022-05
Bibliographic citation
M. Vilares Ferro, V.M. Darriba Bilbao, J. Vilares, Absolute convergence and error thresholds in non-active adaptive sampling, J. Comput. Syst. Sci. 129 (2022) 39-61. http://dx.doi.org/10.1016/j.jcss.2022.05.002
Abstract
[Abstract] Non-active adaptive sampling is a way of building machine learning models from a training database that are expected to dynamically and automatically derive a guaranteed sample size. In this context, and regardless of the strategy used for both scheduling and generating weak predictors, we describe a proposal for calculating absolute convergence and error thresholds. It not only makes it possible to establish when the quality of the model no longer increases, but also supplies a proximity condition to estimate, in absolute terms, how close the model is to achieving that goal, thus supporting decision making when fine-tuning learning parameters in model selection. The technique is proved correct and complete with respect to our working hypotheses, and it also strengthens the robustness of the sampling scheme. Tests meet our expectations and illustrate the proposal in the domain of natural language processing, taking the generation of part-of-speech taggers as a case study.
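To make the abstract's idea concrete, the following is a minimal illustrative sketch of a stopping check on a learning curve: convergence is declared when the accuracy gain between successively sampled models flattens, and a "proximity" value estimates how far the latest model is from a target quality. This is a generic, hypothetical example; the function names, the `epsilon`/`window` parameters, and the target-based proximity measure are assumptions for illustration, not the thresholds or the proximity condition defined in the paper.

```python
# Illustrative sketch only, NOT the paper's method. All names and
# parameters (epsilon, window, target) are hypothetical.

def has_converged(accuracies, epsilon=1e-3, window=3):
    """Declare convergence once the accuracy gain between successively
    sampled models stays below `epsilon` for `window` consecutive steps."""
    if len(accuracies) <= window:
        return False
    gains = [accuracies[i] - accuracies[i - 1] for i in range(1, len(accuracies))]
    return all(abs(g) < epsilon for g in gains[-window:])

def proximity(accuracies, target):
    """Absolute distance between the latest model's quality and a target
    level (a stand-in for a proximity condition on the learning curve)."""
    return max(0.0, target - accuracies[-1])

# Example: a flattening learning curve, e.g. for a POS tagger.
curve = [0.80, 0.88, 0.92, 0.9400, 0.9405, 0.9407, 0.9408]
print(has_converged(curve))                     # True: last gains are tiny
print(round(proximity(curve, target=0.95), 4))  # 0.0092
```

A windowed check, rather than a single-step comparison, avoids stopping on one accidentally flat step of the curve.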
Keywords
Machine learning convergence
Non-active adaptive sampling
POS tagging
Publisher's version
Rights
Creative Commons Attribution license (CC BY-NC-ND 4.0)
ISSN
0022-0000