Parsing as Pretraining
Use this link to cite
http://hdl.handle.net/2183/24893
Metadata
Title
Parsing as Pretraining
Date
2020
Citation
Vilares, David; Strzyz, Michalina; Søgaard, Anders; Gómez Rodríguez, Carlos. Parsing as Pretraining. Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20).
Abstract
Recent analyses suggest that encoders pretrained for language modeling capture certain morpho-syntactic structure. However, probing frameworks for word vectors still do not report results on standard setups such as constituent and dependency parsing. This paper addresses this problem and does full parsing (on English) relying only on pretraining architectures and no decoding. We first cast constituent and dependency parsing as sequence tagging. We then use a single feed-forward layer to directly map word vectors to labels that encode a linearized tree. This is used to: (i) see how far we can get on syntax modelling with just pretrained encoders, and (ii) shed some light on the syntax-sensitivity of different word vectors (by freezing the weights of the pretraining network during training). For evaluation, we use bracketing F1-score and LAS, and analyze in depth the differences across representations for span lengths and dependency displacements. The overall results surpass existing sequence tagging parsers on the PTB (93.5%) and end-to-end EN-EWT UD (78.8%).
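To make the setup concrete, the sketch below illustrates the general idea in PyTorch: a pretrained encoder is frozen, and a single feed-forward (linear) layer maps each contextual word vector to a tag drawn from a linearized-tree label set. The encoder name ("bert-base-cased"), the label inventory size, and the example label scheme are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumes PyTorch + Hugging Face transformers; not the authors' code).
# A frozen pretrained encoder yields one vector per (sub)token; a single linear layer
# maps each vector to a tag encoding part of a linearized parse tree.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ProbeTagger(nn.Module):
    def __init__(self, encoder_name: str, num_labels: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Freeze the pretraining network: only the feed-forward layer is trained.
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.ff = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # One label per (sub)token, e.g. "+1@nsubj" for a dependency-as-tagging scheme
        # (head one word to the right, relation nsubj) -- an illustrative label set.
        return self.ff(hidden)  # shape: (batch, seq_len, num_labels)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = ProbeTagger("bert-base-cased", num_labels=500)  # label count is a placeholder
batch = tokenizer("The cat sat on the mat .", return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
predicted_tags = logits.argmax(-1)  # these tags are later mapped back into a tree
```

Because the encoder weights stay frozen, any parsing accuracy reached by the linear layer reflects syntax already present in the pretrained word vectors rather than syntax learned during fine-tuning, which is the probing question the abstract describes.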
Keywords
Natural language processing
Parsing
Sequence labeling
Pretraining