Formal methods of tokenization for part-of-speech tagging
Use this link to cite:
http://hdl.handle.net/2183/148
Title
Formal methods of tokenization for part-of-speech tagging
Date
2002
Bibliographic citation
Proceedings of the Third International Conference on Computational Linguistics and Intelligent Text Processing (CICLING-2002), Mexico City, Mexico. Published in Lecture Notes in Computer Science, vol. 2276, pp. 240-249. Springer-Verlag. Gelbukh, A. (ed.).
Abstract
One of the most important prerequisites for robust part-of-speech tagging is the correct tokenization, or segmentation, of the texts. This task can involve processes far more complex than simply identifying the different sentences in the text and each of their individual components, yet it is often overlooked in current applications. Nevertheless, this preprocessing step is indispensable in practice, and it is particularly difficult to tackle with scientific precision without falling repeatedly into the analysis of the specific casuistry of every phenomenon detected. In this work, we have developed a preprocessing scheme oriented towards the disambiguation and robust tagging of Galician. It is, however, a proposal for a general architecture that can be applied to other languages, such as Spanish, with very slight modifications.
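To illustrate why tokenization is harder than splitting on whitespace and periods, the sketch below shows a minimal sentence segmenter and tokenizer. The abbreviation list and regular expressions are assumptions made for this example only; they do not reproduce the architecture proposed in the paper.

```python
import re

# Hypothetical abbreviation list: periods after these should not end a sentence.
ABBREVIATIONS = {"dra.", "sr.", "etc.", "e.g.", "i.e."}

def split_sentences(text):
    """Split text into sentences, avoiding breaks after known abbreviations."""
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        if token.endswith((".", "?", "!")) and token.lower() not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:  # flush any trailing material without final punctuation
        sentences.append(" ".join(current))
    return sentences

def tokenize(sentence):
    """Separate words from punctuation, keeping numbers like 3,14 intact."""
    return re.findall(r"\d+(?:[.,]\d+)?|\w+|[^\w\s]", sentence, re.UNICODE)

text = "Dra. Souto chegou. Pagou 3,14 euros, etc. Marchou logo!"
for sentence in split_sentences(text):
    print(tokenize(sentence))
```

Even this toy version must already handle abbreviations and decimal numbers specially, which hints at the casuistry the abstract refers to; a production preprocessor additionally faces dates, proper nouns, contractions, and multiword expressions.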
ISSN
0302-9743
ISBN
3-540-43219-1