
Natural Language Processing, Lecture 1: Introduction and Overview


Questions this lecture will answer:
1. What is natural language processing?
2. Why is natural language processing hard?
3. Can we build a program that learns from text?
4. What will this course cover?
 
I. What is natural language processing?
1. Computers take natural language as input or as output:
  Input corresponds to natural language understanding;
  Output corresponds to natural language generation.
2. Several views of NLP:
  A. A computational model of human language processing:
  - programs operate internally the way humans do
  B. A computational model of human communication:
  - programs interact the way humans do
  C. Computational systems that process text and speech efficiently
3. Applications of NLP:
  A. Machine Translation with Babel Fish
  B. MIT Translation System
  C. Text Summarization
  D. Dialogue Systems
  E. Other NLP Applications:
  - Grammar Checking
  - Sentiment Classification
  - ETS Essay Scoring
 
II. Why is natural language processing hard?
1. Ambiguity
 "At last, a computer that understands you like your mother"
Possible readings of this sentence:
  A. It understands you as well as your mother understands you;
  B. It understands (that) you like your mother;
  C. It understands you as well as it understands your mother.
  D. For comparison, Google Translate renders the sentence roughly as reading B (translator's note).
 For readings A and C: does this mean it understands you well, or poorly? (1 and 3: Does this mean well, or poorly?)
2. Ambiguity at Many Levels
  A. Ambiguity at the acoustic level (speech recognition):
  - "... a computer that understands you like your mother"
  - "... a computer that understands you lie cured mother"
  B. Ambiguity at the syntactic level:
   Different structures lead to different interpretations
   (Further examples of syntactic ambiguity appear in the slides)
  C. Ambiguity at the semantic (meaning) level:
   Two definitions of "mother":
   - a woman who has given birth to a child
   - a stringy slimy substance consisting of yeast cells and bacteria; it is added to cider or wine to produce vinegar
   This is an instance of word sense ambiguity
   More examples of word sense ambiguity:
    - They put money in the bank
     = buried in mud?
    - I saw her duck with a telescope
  D. Ambiguity at the discourse level:
  - Alice says they've built a computer that understands you like your mother
  - But she ...
   ... doesn't know any details
   ... doesn't understand me at all
   This is an instance of anaphora, where "she" co-refers with some other discourse entity
III. The knowledge bottleneck in NLP
We need:
 - knowledge about language;
 - knowledge about the world.
Possible solutions:
 - Symbolic approach: encode all the required information into the computer by hand;
 - Statistical approach: infer language properties from samples of language.
1. Case study: Determiner Placement
Task: Automatically place determiners (a, the, null) in a text
Sample:
Scientists in United States have found way of turning lazy monkeys into workaholics using gene therapy. Usually monkeys work hard only when they know reward is coming, but animals given this treatment did their best all time. Researchers at National Institute of Mental Health near Washington DC, led by Dr Barry Richmond, have now developed genetic treatment which changes their work ethic markedly. ”Monkeys under influence of treatment don’t procrastinate,” Dr Richmond says. Treatment consists of anti-sense DNA - mirror image of piece of one of our genes - and basically prevents that gene from working. But for rest of us, day when such treatments fall into hands of our bosses may be one we would prefer to put off.
2. Relevant Grammar Rules
 a) Determiner placement is largely determined by:
  i. Type of noun (countable, uncountable);
  ii. Reference (specific, generic);
  iii. Information value (given, new);
  iv. Number (singular, plural).
 b) However, many exceptions and special cases play a role, for example:
  i. The definite article is used with newspaper titles (The Times), but the zero article with names of magazines and journals (Time).
3. Symbolic Approach: Determiner Placement
 a) What categories of knowledge do we need:
  i. Linguistic knowledge:
   - Static knowledge: number, countability, ...
   - Context-dependent knowledge: co-reference, ...
  ii. World knowledge:
   - Uniqueness of reference (the current president of the US), type of noun (newspaper vs. magazine), situational associativity between nouns (the score of the football game), ...
  iii. It is hard to encode this information manually!
4. Statistical Approach: Determiner Placement
 a) Naive approach:
  i. Collect a large collection of texts relevant to your domain (e.g., newspaper text).
  ii. For each noun, compute the probability that it takes a certain determiner:
   - p(determiner | noun) = freq(noun, determiner) / freq(noun)
  iii. Given a new noun, select the determiner with the highest likelihood as estimated on the training corpus (see the sketch after this list).
 b) Implementation:
  i. Corpus: training on the first 21 sections of the Wall Street Journal (WSJ) corpus, testing on section 23.
  ii. Prediction accuracy: 71.5%.
 c) Does it work?
  i. The results are not great, but surprisingly high for such a simple method.
  ii. A large fraction of nouns in this corpus always appear with the same determiner, e.g.:
   - "the FBI", "the defendant", ...
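Below is a minimal Python sketch of the naive frequency estimator described above. The function names, the toy (noun, determiner) pairs, and the back-off for unseen nouns are illustrative assumptions, not part of the original lecture.

from collections import Counter, defaultdict

def train_determiner_model(noun_determiner_pairs):
    # Estimate p(determiner | noun) = freq(noun, determiner) / freq(noun)
    # from (noun, determiner) pairs, where determiner is 'a', 'the', or None.
    pair_counts = Counter(noun_determiner_pairs)
    noun_counts = Counter(noun for noun, _ in noun_determiner_pairs)
    model = defaultdict(dict)
    for (noun, det), count in pair_counts.items():
        model[noun][det] = count / noun_counts[noun]
    return model

def predict_determiner(model, noun, default=None):
    # Pick the determiner with the highest estimated probability for this noun;
    # back off to 'no determiner' for nouns never seen in training.
    if noun not in model:
        return default
    return max(model[noun], key=model[noun].get)

# Toy usage with hypothetical training pairs (not the WSJ data from the lecture):
pairs = [("FBI", "the"), ("FBI", "the"), ("monkey", "a"),
         ("monkey", "the"), ("money", None)]
model = train_determiner_model(pairs)
print(predict_determiner(model, "FBI"))    # -> 'the'
print(predict_determiner(model, "water"))  # -> None (unseen noun)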
5. Determiner Placement as Classification
 a) Prediction: "the", "a", "null"
 b) Representation of the problem:
  i. plural? (yes, no)
  ii. first appearance in text? (yes, no)
  iii. noun (member of the vocabulary set)
 c) (Example table omitted.)
 d) Goal: learn a classification function that can predict unseen examples (a sketch of the feature encoding follows).
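As a concrete illustration of the representation in (b), here is a small Python sketch that turns one noun phrase occurrence into a sparse feature dictionary. The class and field names and the example instance are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class NPInstance:
    # One noun phrase occurrence, described by the three features above.
    plural: bool            # plural? (yes/no)
    first_appearance: bool  # first appearance in the text? (yes/no)
    noun: str               # head noun (member of the vocabulary)

def to_features(x: NPInstance) -> dict:
    # Turn an instance into a sparse feature dictionary, a common input
    # format for an off-the-shelf classifier.
    return {
        "plural": int(x.plural),
        "first_appearance": int(x.first_appearance),
        "noun=" + x.noun: 1,   # one-hot indicator for the head noun
    }

# Example: first mention of the singular noun "monkey" (label might be "a")
print(to_features(NPInstance(plural=False, first_appearance=True, noun="monkey")))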
6. Classification Approach
 a) Learn a function from X -> Y (in the previous example, Y = {-1, 0, 1})
 b) Assume there is some distribution D(X, Y), where x ∈ X and y ∈ Y
 c) Attempt to explicitly model the distributions D(X, Y) and D(X|Y) (a count-based sketch follows)
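A minimal sketch of the count-based view in (c): estimate the joint distribution D(X, Y) and the class-conditional D(X | Y) directly from labeled (features, determiner) pairs. The data and variable names are illustrative; the lecture does not commit to a particular model.

from collections import Counter

def estimate_distributions(labeled_instances):
    # labeled_instances: list of (x, y) pairs, where x is a hashable tuple of
    # feature values and y is a determiner label ('the', 'a', 'null').
    joint_counts = Counter(labeled_instances)
    label_counts = Counter(y for _, y in labeled_instances)
    n = len(labeled_instances)
    d_xy = {xy: c / n for xy, c in joint_counts.items()}  # D(X, Y)
    d_x_given_y = {(x, y): c / label_counts[y]            # D(X | Y)
                   for (x, y), c in joint_counts.items()}
    return d_xy, d_x_given_y

# Toy data: ((plural, first_appearance, noun), determiner) pairs, made up here
data = [((False, True, "monkey"), "a"),
        ((False, False, "monkey"), "the"),
        ((True, True, "scientists"), "null")]
d_xy, d_x_given_y = estimate_distributions(data)
print(d_xy[((False, True, "monkey"), "a")])          # 1/3
print(d_x_given_y[((False, True, "monkey"), "a")])   # 1.0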
7. Beyond Classification
 a) Many NLP applications can be viewed as a mapping from one complex set to another:
  i. Parsing: strings to trees
  ii. Machine Translation: strings to strings
  iii. Natural Language Generation: database entries to strings
 b) Note that the classification framework is not suitable in these cases!
8. Mapping in Machine Translation
 a) Weaver's classic statement (1955):
  i. "... one naturally wonders if the problem of translation could conceivably be treated as a problem of cryptography. When I look at an article in Russian, I say: 'this is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.'"
 b) (Machine translation examples omitted.)
 c) Learning for MT:
  i. Parallel corpora are available for several language pairs.
  ii. Basic idea: use a parallel corpus as a training set of translation examples.
  iii. Goal: learn a function that maps a string in a source language to a string in a target language (a standard formalization is sketched below).
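Weaver's decoding analogy is usually made precise in the statistical MT literature (not spelled out in these introductory notes) as the noisy-channel formulation: choose the target-language string that is both fluent and likely to have produced the observed source string,

  \hat{t} = \arg\max_{t} P(t \mid s) = \arg\max_{t} P(t)\, P(s \mid t),

where s is the source string, t ranges over target strings, P(t) is a target-language language model, and P(s | t) is a translation model estimated from a parallel corpus.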
IV. What will this course cover?
1. Computational models and effective representations of linguistic knowledge at different levels (syntax, semantics, discourse).
2. Algorithms that learn language properties from text samples: smoothed estimation, log-linear models, probabilistic context-free grammars, the EM algorithm, co-training, ...
3. Applications built on text-processing technology: machine translation, text summarization, information retrieval.
 
V. Syllabus
 Introduction and Overview (1 class)
 Simple Language Statistics (1 class)
 Language Models (1 class)
 Tagging (1 class)
 Syntactic Parsing (1 class)
 Unsupervised Grammar Induction (1 class)
 Introduction to Lexical Semantics (1 class)
 Word Sense Disambiguation (1 class)
 Semantic Parsing (1 class)
 Introduction to Discourse Processing (1 class)
 Anaphora Resolution (1 class)
 Topical Segmentation (1 class)
 Discourse Parsing (1 class)
 Dialogue Processing (1 class)
 Natural Language Generation (1 class)
 Text Summarization (1 class)
 Information Retrieval (1 class)
 Machine Translation (3 classes)
VI. Prerequisites
1. An interest in language and basic knowledge of English.
2. Some basic linear algebra, probability, and statistics.
3. Basic programming experience.
VII. Assessment
1. Midterm: 35%
2. Two homeworks: 15% each
3. Final project: 35%
 
VIII. Summary
1. Statistical methods vs. "hand-crafted" systems
 a) A great deal of human knowledge has to be encoded as rules.
 b) It is hard to model interactions between rules.
 c) The constraints involved are frequently soft (preferences) rather than rigid.
2. Machine learning for NLP
 a) We need more effective computational representations of linguistic information.
 b) We need learning algorithms better suited to language data.


Source: http://www.cnblogs.com/sccdcwc/p/6445136.html
