Unlike LSA (Latent Semantic Analysis), the paper below proposes ESA (Explicit Semantic Analysis) and shows how to use it for computing semantic relatedness and for text categorization. The basic idea is simple: from the contents of Wikipedia, build a semantic representation for every word that appears in a Wikipedia article. Each word's representation is a high-dimensional vector in which every dimension corresponds to a Wikipedia concept (article). From these word-level representations, semantic representations of text fragments and whole documents can be derived in turn. As the authors point out, such representations are especially helpful for the semantic processing of short texts. They also offer a natural way to disambiguate polysemous words: in context, the representations of the surrounding words reinforce the sense of the polysemous word that fits that context, so word-sense disambiguation falls out of the representation itself.
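To make this concrete, here is a minimal Python sketch of how ESA-style interpretation vectors could be composed and compared. The word-to-concept weights in WORD_TO_CONCEPTS are invented for illustration; in the actual ESA model each weight is the TF-IDF score of the word in the corresponding Wikipedia article. The example also shows how context words strengthen the correct sense of the ambiguous word "bank".

```python
# Illustrative sketch of ESA-style interpretation vectors (toy weights only).
# In the real ESA model, each weight is the TF-IDF score of the word in the
# corresponding Wikipedia article.
from collections import defaultdict
import math

# Hypothetical inverted index: word -> {Wikipedia concept: weight}
WORD_TO_CONCEPTS = {
    "bank":  {"Bank (finance)": 0.8, "Riverbank": 0.6},
    "money": {"Bank (finance)": 0.9, "Currency": 0.7},
    "river": {"Riverbank": 0.9, "River": 0.8},
    "loan":  {"Bank (finance)": 0.7, "Debt": 0.6},
}

def esa_vector(text):
    """Interpretation vector of a text: sum of its words' concept vectors."""
    vec = defaultdict(float)
    for word in text.lower().split():
        for concept, weight in WORD_TO_CONCEPTS.get(word, {}).items():
            vec[concept] += weight
    return dict(vec)

def cosine(u, v):
    """Cosine similarity between two sparse concept vectors."""
    dot = sum(u.get(c, 0.0) * w for c, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

if __name__ == "__main__":
    # Context reinforces the financial sense of the ambiguous word "bank".
    print(esa_vector("bank money loan"))  # "Bank (finance)" dominates
    print(esa_vector("bank river"))       # "Riverbank" dominates
    # Semantic relatedness between two short texts via cosine similarity.
    print(cosine(esa_vector("bank money"), esa_vector("loan")))
```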
Wikipedia-based Semantic Interpretation for Natural Language Processing
http://www.aaai.org/Papers/JAIR/Vol34/JAIR-3413.pdf
Evgeniy Gabrilovich and Shaul Markovitch.
Abstract
Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited
lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our
method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method
on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural
concepts, the ESA model is easy to explain to human users.
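As a rough illustration of where the concept weights come from, the following sketch builds a word-to-concept inverted index from a toy corpus that stands in for Wikipedia, using simple TF-IDF weighting in the spirit of the paper. The article texts and the helper name build_inverted_index are hypothetical.

```python
# Sketch of building a word -> concept index from a toy "Wikipedia-like"
# corpus with TF-IDF weights. The articles are invented stand-ins for real
# Wikipedia articles.
import math
from collections import Counter, defaultdict

ARTICLES = {
    "Bank (finance)": "bank money loan deposit interest money bank",
    "Riverbank":      "river bank water shore erosion river",
    "Currency":       "money currency exchange coin money",
}

def build_inverted_index(articles):
    """Map each word to {concept: TF-IDF weight of the word in that article}."""
    doc_freq = Counter()
    term_freqs = {}
    for concept, text in articles.items():
        tf = Counter(text.split())
        term_freqs[concept] = tf
        doc_freq.update(tf.keys())
    n_docs = len(articles)
    index = defaultdict(dict)
    for concept, tf in term_freqs.items():
        for word, count in tf.items():
            idf = math.log(n_docs / doc_freq[word])
            index[word][concept] = count * idf
    return index

if __name__ == "__main__":
    index = build_inverted_index(ARTICLES)
    # "bank" maps to both the finance and the river concepts, with weights
    # reflecting how strongly each article is associated with the word.
    print(index["bank"])
```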
Original post: http://blog.csdn.net/three_body/article/details/30838823