Loss Function - 1


http://www.ics.uci.edu/~dramanan/teaching/ics273a_winter08/lectures/lecture14.pdf

  1. Loss Function

    A loss function can be viewed as an error part (loss term) plus a regularization part (regularization term):

    $$J(w) = \sum_i L(y_i, f(x_i; w)) + \lambda\, R(w)$$
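As a minimal sketch of this decomposition (the function name `regularized_loss` and the choice of an L2 penalty are my own assumptions, not from the slides):

```python
import numpy as np

def regularized_loss(w, X, y, loss_fn, lam):
    """Total objective = average loss term + L2 regularization term."""
    scores = X @ w                      # raw model scores
    loss_term = np.mean(loss_fn(y, scores))
    reg_term = lam * np.dot(w, w)       # L2 penalty on the weights
    return loss_term + reg_term
```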

1.1 Loss Term

  • Gold Standard (ideal case)
  • Hinge (SVM, soft margin)
  • Log (logistic regression, cross entropy error)
  • Squared loss (linear regression)
  • Exponential loss (Boosting)


Gold Standard loss is also known as 0-1 loss; it counts the number of classification errors:

$$L_{0/1}(y, \hat{y}) = \mathbf{1}[\hat{y} \neq y]$$
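A one-line NumPy sketch (averaging rather than counting; the function name is mine):

```python
import numpy as np

def zero_one_loss(y_true, y_pred):
    """0-1 loss: fraction of misclassified examples."""
    return np.mean(y_true != y_pred)
```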

Hinge Loss http://en.wikipedia.org/wiki/Hinge_loss

For an intended output t = ±1 and a classifier score y, the hinge loss of the prediction y is defined as

$$\ell(y) = \max(0,\, 1 - t \cdot y)$$

Note that y should be the "raw" output of the classifier's decision function, not the predicted class label. E.g., in linear SVMs,

$$y = \mathbf{w} \cdot \mathbf{x} + b$$

It can be seen that when t and y have the same sign (meaning y predicts the right class) and $|y| \ge 1$, the hinge loss $\ell(y) = 0$, but when they have opposite sign, $\ell(y)$ increases linearly with y (one-sided error).


From <http://en.wikipedia.org/wiki/Hinge_loss>


[Figure] Plot of hinge loss (blue) vs. zero-one loss (misclassification, green: y < 0) for t = 1 and variable y. Note that the hinge loss penalizes predictions y < 1, corresponding to the notion of a margin in a support vector machine.
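A direct NumPy translation of the definition above (a sketch; the function name is mine):

```python
import numpy as np

def hinge_loss(t, y):
    """Hinge loss max(0, 1 - t*y) for labels t in {-1, +1} and raw scores y."""
    return np.maximum(0.0, 1.0 - t * y)
```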


In the paper Pegasos: Primal Estimated sub-GrAdient SOlver for SVM, the objective is:

$$\min_{w} \ \frac{\lambda}{2} \|w\|^2 + \frac{1}{m} \sum_{(x, y) \in S} \max\{0,\, 1 - y \langle w, x \rangle\}$$

Here the first part is viewed as the regularization term and the second part as the loss term; compare this with Ng's lecture notes on the SVM.

Without regularization:

$$\min_{w} \ \frac{1}{m} \sum_{i=1}^{m} \max\{0,\, 1 - y_i \langle w, x_i \rangle\}$$

With regularization:

$$\min_{w} \ \frac{\lambda}{2} \|w\|^2 + \frac{1}{m} \sum_{i=1}^{m} \max\{0,\, 1 - y_i \langle w, x_i \rangle\}$$
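A rough sketch of the single-example sub-gradient update Pegasos performs on this objective (simplified: the optional projection step is omitted, and the variable names are mine):

```python
import numpy as np

def pegasos_step(w, x, y, lam, t):
    """One Pegasos sub-gradient step on example (x, y), with y in {-1, +1}."""
    eta = 1.0 / (lam * t)              # step size at iteration t
    if y * np.dot(w, x) < 1:           # margin violated: hinge loss is active
        w = (1 - eta * lam) * w + eta * y * x
    else:                              # margin satisfied: only shrink w
        w = (1 - eta * lam) * w
    return w
```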


Log Loss

Ng's lecture notes 1 first cover linear regression, leading to the least-squares error, which is then justified from a probabilistic angle via a Gaussian noise model.

They then cover logistic regression, using MLE to derive the optimization objective: maximize the probability of the observed training data.

$$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$$

$$P(y \mid x; \theta) = (h_\theta(x))^y \, (1 - h_\theta(x))^{1-y}$$

$$L(\theta) = \prod_{i=1}^{m} P(y^{(i)} \mid x^{(i)}; \theta)$$

Maximize the following log-likelihood function:

$$\ell(\theta) = \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$

Maximizing this is exactly minimizing the cross entropy.
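A minimal NumPy sketch of this objective, written as a negative log-likelihood to be minimized (the names and the `eps` guard are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(theta, X, y):
    """Negative log-likelihood of logistic regression, labels y in {0, 1}.
    Minimizing this equals maximizing l(theta) above."""
    h = sigmoid(X @ theta)
    eps = 1e-12                        # guard against log(0)
    return -np.sum(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
```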


http://en.wikipedia.org/wiki/Cross_entropy

http://www.cnblogs.com/rocketfan/p/3350450.html — information theory: the relationship between cross entropy and KL divergence

$$H(p, q) = H(p) + D_{KL}(p \| q)$$

$$H(p, q) = -\sum_{x} p(x) \log q(x)$$
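These two identities, sketched directly in NumPy (assuming strictly positive discrete distributions; the function names are mine):

```python
import numpy as np

def cross_entropy(p, q):
    """H(p, q) = -sum_x p(x) log q(x) for discrete distributions."""
    return -np.sum(p * np.log(q))

def kl_divergence(p, q):
    """D_KL(p || q)."""
    return np.sum(p * np.log(p / q))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
entropy = -np.sum(p * np.log(p))       # H(p)
# Check the identity H(p, q) = H(p) + D_KL(p || q):
assert np.isclose(cross_entropy(p, q), entropy + kl_divergence(p, q))
```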

Cross entropy can be used to define the loss function in machine learning and optimization. The true probability $p_i$ is the true label, and the given distribution $q_i$ is the predicted value of the current model.

More specifically, let us consider logistic regression, which (in its most basic guise) deals with classifying a given set of data points into two possible classes generically labelled $0$ and $1$. The logistic regression model thus predicts an output $y \in \{0, 1\}$, given an input vector $\mathbf{x}$. The probability is modeled using the logistic function $g(z) = 1/(1 + e^{-z})$. Namely, the probability of finding the output $y = 1$ is given by

$$q_{y=1} = \hat{y} \equiv g(\mathbf{w} \cdot \mathbf{x}) = \frac{1}{1 + e^{-\mathbf{w} \cdot \mathbf{x}}},$$

where the vector of weights $\mathbf{w}$ is learned through some appropriate algorithm such as gradient descent. Similarly, the conjugate probability of finding the output $y = 0$ is simply given by

$$q_{y=0} = 1 - \hat{y}.$$

The true (observed) probabilities can be expressed similarly as $p_{y=1} = y$ and $p_{y=0} = 1 - y$.

Having set up our notation, $p \in \{y, 1-y\}$ and $q \in \{\hat{y}, 1-\hat{y}\}$, we can use cross entropy to get a measure for similarity between $p$ and $q$:

$$H(p, q) = -\sum_i p_i \log q_i = -y \log \hat{y} - (1 - y) \log(1 - \hat{y}).$$

The typical loss function that one uses in logistic regression is computed by taking the average of all cross-entropies in the sample. Specifically, suppose we have $N$ samples, with each sample labeled by $n = 1, \dots, N$. The loss function is then given by

$$L(\mathbf{w}) = \frac{1}{N} \sum_{n=1}^{N} H(p_n, q_n) = -\frac{1}{N} \sum_{n=1}^{N} \left[ y_n \log \hat{y}_n + (1 - y_n) \log(1 - \hat{y}_n) \right],$$

where $\hat{y}_n \equiv g(\mathbf{w} \cdot \mathbf{x}_n) = 1/(1 + e^{-\mathbf{w} \cdot \mathbf{x}_n})$, with $g(z)$ the logistic function as before.

The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (in this case, the binary label is often denoted by {−1, +1}).[1]

From <http://en.wikipedia.org/wiki/Cross_entropy>


This therefore agrees completely with the conclusion Ng derives from the MLE angle! The only difference is the outermost minus sign.

In other words, the optimization objective of logistic regression is the cross entropy:

$$J(\theta) = -\frac{1}{N} \sum_{n=1}^{N} \left[ y_n \log h_\theta(x_n) + (1 - y_n) \log\left(1 - h_\theta(x_n)\right) \right]$$
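A quick numeric check of this sign-flip claim on synthetic data (everything here is my own illustration, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (rng.random(100) < 0.5).astype(float)
theta = rng.normal(size=3)

h = 1.0 / (1.0 + np.exp(-(X @ theta)))
log_lik = np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))      # l(theta)
cross_ent = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))  # L(w)

# Maximizing l(theta) is minimizing N * L(w): they differ only by a sign
# (and the 1/N averaging factor).
assert np.isclose(log_lik, -len(y) * cross_ent)
```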

? ?

Squared Loss

$$L(y, f(x)) = (y - f(x))^2$$
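The same one-liner treatment for completeness (the function name is mine):

```python
import numpy as np

def squared_loss(y_true, y_pred):
    """Squared loss as used in linear regression (often averaged or halved)."""
    return (y_true - y_pred) ** 2
```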


Exponential Loss

$$L(y, f(x)) = e^{-y \cdot f(x)}$$

Exponential loss is commonly used in boosting. It is always > 0, and it ensures that the closer the output is to the correct result, the smaller the loss, and conversely, the further away, the larger the loss.
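And the corresponding sketch (the function name is mine):

```python
import numpy as np

def exponential_loss(t, y):
    """Exponential loss e^{-t*y} for labels t in {-1, +1} and raw scores y.
    Always positive; shrinks toward 0 as t*y grows, blows up as t*y falls."""
    return np.exp(-t * y)
```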


Original post: http://www.cnblogs.com/rocketfan/p/4083821.html
