'''Part-of-speech tagging''' ('''POS tagging''' or '''POST'''), also called '''grammatical tagging''', is the process of marking up the words in a text as corresponding to a particular [[parts of speech|part of speech]], based on both its definition and its context, that is, its relationship with adjacent and related words in a [[phrase]], [[sentence]], or [[paragraph]]. A simplified form of this is commonly taught to school-age children, in the identification of words as [[noun]]s, [[verb]]s, [[adjective]]s, [[adverb]]s, and so on. Once performed by hand, POS tagging is now done in the context of [[computational linguistics]], using [[algorithms]] which associate discrete terms, as well as hidden parts of speech, in accordance with a set of descriptive tags.

==History==

Research on part-of-speech tagging has been closely tied to [[corpus linguistics]]. The first major corpus of English for computer analysis was the [[Brown Corpus]], developed at [[Brown University]] by [[Henry Kucera]] and [[Nelson Francis]] in the mid-1960s. It consists of about 1,000,000 words of running English prose, made up of 500 samples from randomly chosen publications. Each sample is 2,000 or more words (ending at the first sentence-end after 2,000 words, so that the corpus contains only complete sentences).

The [[Brown Corpus]] was painstakingly "tagged" with part-of-speech markers over many years. A first approximation was done with a program by Greene and Rubin, which consisted of a huge handmade list of what categories could co-occur at all. For example, article followed by noun can occur, but article followed by verb (arguably) cannot. The program got about 70% correct. Its results were repeatedly reviewed and corrected by hand, and later users sent in errata, so that by the late 1970s the tagging was nearly perfect (allowing for some cases on which even human speakers might not agree).

This corpus has been used for innumerable studies of word frequency and of part of speech, and inspired the development of similar "tagged" corpora in many other languages. Statistics derived by analyzing it formed the basis for most later part-of-speech tagging systems, such as CLAWS and [[VOLSUNGA]]. It has since been superseded by larger corpora such as the 100-million-word [[British National Corpus]].

For some time, part-of-speech tagging was considered an inseparable part of [[natural language processing]], because there are certain cases where the correct part of speech cannot be decided without understanding the [[semantics]] or even the [[pragmatics]] of the context. This is extremely expensive, especially because analyzing the higher levels is much harder when multiple part-of-speech possibilities must be considered for each word.

In the mid-1980s, researchers in Europe began to use [[hidden Markov model]]s (HMMs) to disambiguate parts of speech, when working to tag the [[Lancaster-Oslo-Bergen Corpus]] of British English. HMMs involve counting cases (such as from the Brown Corpus) and making a table of the probabilities of certain sequences. For example, once an article such as "the" has been seen, perhaps the next word is a noun 40% of the time, an adjective 40%, and a number 20%. Knowing this, a program can decide that "can" in "the can" is far more likely to be a noun than a verb or a modal. The same method can of course be used to benefit from knowledge about following words.
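As a rough illustration only (not any particular tagger's implementation), the following Python sketch shows how a bigram table of this kind can disambiguate "can" after "the". The tag names, the probabilities, and the <code>best_tag</code> helper are all invented for the example; a real tagger would estimate the tables by counting a tagged corpus such as the Brown Corpus.

<syntaxhighlight lang="python">
# Minimal sketch of the bigram (first-order) idea described above.
# All numbers are invented for illustration.

# P(tag | previous tag): rough figures for the word after an article ("DET")
transition = {
    ("DET", "NOUN"): 0.40,
    ("DET", "ADJ"): 0.40,
    ("DET", "NUM"): 0.20,
    ("DET", "VERB"): 0.001,   # an article directly followed by a verb is very unlikely
    ("DET", "MODAL"): 0.001,
}

# P(word | tag), likewise invented
emission = {
    ("can", "NOUN"): 0.002,
    ("can", "VERB"): 0.003,
    ("can", "MODAL"): 0.200,
}

def best_tag(prev_tag, word, candidates):
    """Pick the tag maximising P(tag | prev_tag) * P(word | tag)."""
    return max(candidates,
               key=lambda t: transition.get((prev_tag, t), 1e-6)
                             * emission.get((word, t), 1e-6))

# "the can": after a determiner the noun reading wins,
# even though "can" on its own is most often a modal.
print(best_tag("DET", "can", ["NOUN", "VERB", "MODAL"]))   # prints NOUN
</syntaxhighlight>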
More advanced ("higher order") HMMs learn the probabilities not only of pairs but of triples or even longer sequences. So, for example, if an article and a verb have just been seen, the next item is very likely to be a preposition, article, or noun, but much less likely to be another verb.

When several ambiguous words occur together, the possibilities multiply. However, it is easy to enumerate every combination and to assign a relative probability to each one, by multiplying together the probabilities of each choice in turn. The combination with the highest probability is then chosen. The European group developed CLAWS, a tagging program that did exactly this and achieved accuracy in the 93–95% range.

It is worth remembering, as [[Eugene Charniak]] points out in ''Statistical techniques for natural language parsing'' [http://www.cs.brown.edu/people/ec/home.html], that merely assigning the most common tag to each known word and the tag "proper noun" to all unknowns will approach 90% accuracy, because many words are unambiguous.

CLAWS pioneered the field of HMM-based part-of-speech tagging but was quite expensive, since it enumerated all possibilities. It sometimes had to resort to backup methods when there were simply too many (the [[Brown Corpus]] contains a case with 17 ambiguous words in a row, and there are words such as "still" that can represent as many as 7 distinct parts of speech).

In 1987, [[Steve DeRose]] and [[Ken Church]] independently developed [[dynamic programming]] algorithms to solve the same problem in vastly less time. Their methods were similar to the [[Viterbi algorithm]], known for some time in other fields. DeRose used a table of pairs, while Church used a table of triples and an ingenious method of estimating the values for triples that were rare or nonexistent in the Brown Corpus (actual measurement of triple probabilities would require a much larger corpus). Both methods achieved accuracy over 95%. DeRose's 1990 dissertation at [[Brown University]] included analyses of the specific error types, probabilities, and other related data, and replicated his work for Greek, where it proved similarly effective.

These findings were surprisingly disruptive to the field of [[natural language processing]]. The accuracy reported was higher than the typical accuracy of very sophisticated algorithms that integrated part-of-speech choice with many higher levels of linguistic analysis: syntax, morphology, semantics, and so on. CLAWS and the methods of DeRose and Church did fail in some of the known cases where semantics is required, but those proved negligibly rare. This convinced many in the field that part-of-speech tagging could usefully be separated out from the other levels of processing; this in turn simplified the theory and practice of computerized language analysis and encouraged researchers to find ways to separate out other pieces as well. Markov models are now the standard method for part-of-speech assignment.
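The following Python sketch shows a generic bigram [[Viterbi algorithm|Viterbi]] decoder of the kind described above. It is a minimal illustration only, not the specific algorithm or probability tables used by CLAWS, DeRose, or Church; the function name <code>viterbi</code>, the dictionary-based tables, and the smoothing value are assumptions made for the example.

<syntaxhighlight lang="python">
import math

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Bigram Viterbi decoding: find the single most probable tag sequence
    without enumerating every combination.  The probability tables are
    plain dicts; unseen events get a small smoothing value."""
    def p(table, key):
        return table.get(key, 1e-8)

    # best[i][t]: log-probability of the best tag sequence for words[:i+1] ending in t
    best = [{t: math.log(p(start_p, t)) + math.log(p(emit_p, (words[0], t)))
             for t in tags}]
    back = [{}]

    for i in range(1, len(words)):
        best.append({})
        back.append({})
        for t in tags:
            prev, score = max(
                ((s, best[i - 1][s] + math.log(p(trans_p, (s, t)))) for s in tags),
                key=lambda item: item[1])
            best[i][t] = score + math.log(p(emit_p, (words[i], t)))
            back[i][t] = prev

    # Follow the back-pointers from the best final tag
    tag = max(best[-1], key=best[-1].get)
    path = [tag]
    for i in range(len(words) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))
</syntaxhighlight>

Because only the best-scoring path into each tag is kept at every position, the work grows linearly with sentence length instead of multiplying with every ambiguous word, which is what made this approach so much faster than exhaustive enumeration.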
The methods already discussed involve working from a pre-existing corpus to learn tag probabilities. It is, however, also possible to [[Bootstrapping (linguistics)|bootstrap]] using "unsupervised" tagging. Unsupervised tagging techniques use an untagged corpus for their training data and produce the tagset by induction. That is, they observe patterns in word use and derive part-of-speech categories themselves. For example, statistics readily reveal that "the", "a", and "an" occur in similar contexts, while "eat" occurs in very different ones. With sufficient iteration, similarity classes of words emerge that are remarkably similar to those human linguists would expect, and the differences themselves sometimes suggest valuable new insights.

Both supervised and unsupervised tagging can be further subdivided into rule-based, stochastic, and neural approaches. Some current major algorithms for part-of-speech tagging include the [[Viterbi algorithm]], the [[Brill Tagger]], and the [[Baum-Welch algorithm]] (also known as the forward-backward algorithm). [[Hidden Markov model]] and [[visible Markov model]] taggers can both be implemented using the [[Viterbi algorithm]].

==See also==
*[[Semantic net]]
*[[Word sense disambiguation]]

==References==
*Charniak, Eugene. 1997. "Statistical Techniques for Natural Language Parsing". ''AI Magazine'' 18(4):33–44.
*van Halteren, Hans, Jakub Zavrel, and Walter Daelemans. 2001. "Improving Accuracy in NLP Through Combination of Machine Learning Systems". ''Computational Linguistics'' 27(2):199–229. [http://acl.ldc.upenn.edu/J/J01/J01-2002.pdf PDF]

==External links==
*[http://www-nlp.stanford.edu/links/statnlp.html#Taggers Overview of available taggers]
*[http://www.comp.lancs.ac.uk/computing/research/ucrel/claws/ CLAWS]
*[http://www.alias-i.com/lingpipe LingPipe] Java natural language processing software, including trainable part-of-speech taggers with first-best, n-best, and per-tag confidence output.
*[http://opennlp.sourceforge.net/ OpenNLP Tagger] LGPL tagger based on the maxent maximum entropy package
*[http://crftagger.sourceforge.net/ CRFTagger] Conditional Random Fields (CRFs) English POS tagger
*[http://jtextpro.sourceforge.net/ JTextPro] A Java-based text processing toolkit

[[Category:Natural language processing]]
[[Category:Corpus linguistics]]

[[ca:Desambiguació lèxica]]
[[de:Part-of-speech_Tagging]]
[[es:Part-of-speech tagging]]