Authors: Ezquerro, Ana; Gómez-Rodríguez, Carlos; Vilares, David
Date available: 2025-03-03
Date issued: 2025-03
Citation: A. Ezquerro, C. Gómez-Rodríguez, and D. Vilares, "Better Benchmarking LLMs for Zero-Shot Dependency Parsing", Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), University of Tartu Library, pp. 121–135, March 3–4, 2025.
URIs: https://hdl.handle.net/10062/107204 ; http://hdl.handle.net/2183/41295
Description: Presented at NoDaLiDa/Baltic-HLT 2025, pages 121–135, March 3–4, 2025. ©2025 University of Tartu Library. Associated code: https://github.com/anaezquerro/naipar
Abstract: While LLMs excel in zero-shot tasks, their performance on linguistic challenges such as syntactic parsing has been less scrutinized. This paper studies state-of-the-art open-weight LLMs on the task by comparing them to baselines that have no access to the input sentence, including baselines that have not been used in this context, such as random projective trees or optimal linear arrangements. The results show that most of the tested LLMs cannot outperform the best uninformed baselines, with only the newest and largest versions of LLaMA doing so for most languages, and still achieving rather low performance. Thus, accurate zero-shot syntactic parsing is not forthcoming with open LLMs.
Language: eng
License: Attribution-NonCommercial-NoDerivs 3.0 Spain (CC BY-NC-ND 3.0 ES), http://creativecommons.org/licenses/by-nc-nd/3.0/es/
Keywords: LLMs; Large language models; Syntactic parsing
Title: Better Benchmarking LLMs for Zero-Shot Dependency Parsing
Type: conference output
Access: open access
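The abstract mentions random projective trees as an uninformed baseline. As a minimal illustrative sketch (not the paper's implementation — the function name and sampling scheme are assumptions for illustration), one way to sample a projective dependency tree is to recursively pick a head for each contiguous span, so that every subtree covers a contiguous substring:

```python
import random

def random_projective_tree(n, seed=None):
    """Sample a projective dependency tree over tokens 1..n.

    Returns a list heads where heads[i-1] is the head of token i
    (0 denotes the artificial root). Because each recursive call
    assigns a head to a contiguous span, every subtree spans a
    contiguous substring, which guarantees projectivity. Note this
    simple scheme is not uniform over all projective trees.
    """
    rng = random.Random(seed)
    heads = [0] * (n + 1)  # index 0 is the artificial root slot

    def attach(lo, hi, head):
        # Assign heads to all tokens in span [lo, hi], whose
        # external governor is `head`.
        if lo > hi:
            return
        k = rng.randint(lo, hi)   # pick the head of this span
        heads[k] = head
        attach(lo, k - 1, k)      # tokens left of k depend (transitively) on k
        attach(k + 1, hi, k)      # tokens right of k likewise

    attach(1, n, 0)
    return heads[1:]
```

Such a baseline requires no access to the sentence beyond its length, which is what makes it "uninformed" in the sense used by the abstract.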