Stemming:
In NLP, after a sentence or document has been tokenized, the tokens are usually stemmed. Stemming normalizes words to a common base form, for example reducing plural nouns to the singular and stripping verbs of tense and other inflections.
For English tokens obtained from word segmentation, stemming mainly means mapping plural nouns to the singular and inflected verb forms back to the base form. Verbs can be stemmed with the Porter algorithm [5].
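As a quick illustration (a minimal sketch that is not part of the original post; the example words are chosen only for demonstration), NLTK exposes the Porter algorithm as PorterStemmer:

from nltk.stem.porter import PorterStemmer

porter = PorterStemmer()
for word in ["doing", "fruits", "plays"]:
    # stem() maps each inflected form to its Porter stem
    print(word, "->", porter.stem(word))
# doing -> do, fruits -> fruit, plays -> play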
Here is an example in Python using NLTK's SnowballStemmer (from nltk.stem.snowball import SnowballStemmer):
import re
import nltk
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")

def tokenize_and_stem(text):
    # first tokenize by sentence, then by word, so that punctuation is caught as its own token
    tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
    print(tokens)
    # filter out any tokens not containing letters (e.g. numeric tokens, raw punctuation)
    filtered_tokens = [token for token in tokens if re.search('[a-zA-Z]', token)]
    # reduce each remaining token to its stem
    stems = [stemmer.stem(t) for t in filtered_tokens]
    print(stems)
    return stems
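The call below is not shown in the original post; the input sentence is reconstructed from the printed token list, so running it should reproduce the output that follows:

tokenize_and_stem("hello, what are you doing now, i want to go to school, by some fruits")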
Output:
['hello', ',', 'what', 'are', 'you', 'doing', 'now', ',', 'i', 'want', 'to', 'go', 'to', 'school', ',', 'by', 'some', 'fruits']
['hello', 'what', 'are', 'you', 'do', 'now', 'i', 'want', 'to', 'go', 'to', 'school', 'by', 'some', 'fruit']
Comparing the two lists shows that doing -> do and fruits -> fruit, and that the commas have been filtered out as well.
Original post: http://www.cnblogs.com/lovychen/p/5760939.html