The Fragility of Multi-Treebank Parsing Evaluation

Use this link to cite
http://hdl.handle.net/2183/36590
Collections
- Investigación (FFIL)
Title
The Fragility of Multi-Treebank Parsing Evaluation
Date
2022-10
Citation
Iago Alonso-Alonso, David Vilares, and Carlos Gómez-Rodríguez. 2022. The Fragility of Multi-Treebank Parsing Evaluation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5345–5359, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Abstract
[Abstract]: Treebank selection for parsing evaluation and the spurious effects that might arise from a biased choice have not been explored in detail. This paper studies how evaluating on a single subset of treebanks can lead to weak conclusions. First, we take a few contrasting parsers, and run them on subsets of treebanks proposed in previous work, whose use was justified (or not) on criteria such as typology or data scarcity. Second, we run a large-scale version of this experiment, create vast amounts of random subsets of treebanks, and compare on them many parsers whose scores are available. The results show substantial variability across subsets and that although establishing guidelines for good treebank selection is hard, some inadequate strategies can be easily avoided.
Keywords
Multi-treebank parsing evaluation
Treebank selection bias
Evaluation methodology
Parsing performance variability
Description
Held in Gyeongju, Republic of Korea. October 12-17, 2022
Editor version
Rights
Attribution 3.0 Spain