Formal methods of tokenization for part-of-speech tagging

Use this link to cite:
http://hdl.handle.net/2183/148
Bibliographic citation
Proceedings of the Third International Conference on Computational Linguistics and Intelligent Text Processing (CICLing-2002), Mexico City, Mexico. Published in Lecture Notes in Computer Science, vol. 2276, pp. 240-249. Springer-Verlag. Gelbukh, A. (ed.).
Abstract
One of the most important prerequisites for robust part-of-speech tagging is the correct tokenization, or segmentation, of the texts. This task can involve processes far more complex than simply identifying the different sentences in the text and each of their individual components, yet it is often overlooked in many current applications. Nevertheless, this preprocessing step is indispensable in practice, and it is particularly difficult to tackle with scientific precision without falling repeatedly into a case-by-case analysis of every phenomenon detected. In this work, we have developed a preprocessing scheme oriented towards the disambiguation and robust tagging of Galician. It is, however, a proposal for a general architecture that can be applied to other languages, such as Spanish, with very slight modifications.
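To illustrate why tokenization is more than splitting on whitespace and punctuation, the sketch below shows one of the ambiguities the abstract alludes to: a period may end a sentence or belong to an abbreviation. This is a minimal, hypothetical example, not the architecture proposed in the paper; the `ABBREVIATIONS` set and both function names are illustrative stand-ins for the much richer lexical resources a real system for Galician or Spanish would need.

```python
import re

# Hypothetical abbreviation list; a real Galician/Spanish system
# would draw on much larger lexicons.
ABBREVIATIONS = {"sr.", "sra.", "dr.", "etc."}

def split_sentences(text):
    """Split on sentence-final periods, but not after known
    abbreviations -- one source of segmentation ambiguity."""
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        if token.endswith(".") and token.lower() not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:
        sentences.append(" ".join(current))
    return sentences

def tokenize(sentence):
    """Separate punctuation marks from word forms."""
    return re.findall(r"\w+|[^\w\s]", sentence, re.UNICODE)
```

For example, `split_sentences("O Sr. Gómez chegou tarde. Falou pouco.")` keeps the abbreviation `Sr.` inside the first sentence and yields two sentences rather than three, after which `tokenize` detaches the final period from `tarde`.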






