
What do base_lr, weight_decay, lr_mult and decay_mult mean in Caffe?

In machine learning and pattern recognition, overfitting is a common problem, and as a network starts to overfit, its weights tend to grow large. To guard against this, a penalty term is added to the error function. A common choice is the sum of the squares of all the weights multiplied by a decay constant; this term punishes large weights.

The learning rate is a parameter that determines how much an update step influences the current value of the weights. Weight decay, in contrast, is an additional term in the weight update rule that causes the weights to decay exponentially toward zero if no other update is scheduled.

So let's say that we have a cost or error function $E(w)$ that we want to minimize. Gradient descent tells us to modify the weights $w$ in the direction of steepest descent in $E$:

$$ w_i \leftarrow w_i - \eta \frac{\partial E}{\partial w_i}, $$

where $\eta$ is the learning rate, and if it's large you will have a correspondingly large modification of the weights $w_i$ (in general it shouldn't be too large, otherwise you'll overshoot the local minimum in your cost function).

In order to effectively limit the number of free parameters in your model so as to avoid over-fitting, it is possible to regularize the cost function. An easy way to do that is by introducing a zero-mean Gaussian prior over the weights, which is equivalent to changing the cost function to $\tilde{E}(w) = E(w) + \frac{\lambda}{2} w^2$. In practice this penalizes large weights and effectively limits the freedom in your model. The regularization parameter $\lambda$ determines how you trade off the original cost $E$ against the penalty on large weights.
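Differentiating this regularized cost with respect to a single weight makes the extra gradient term explicit (the intermediate step between the cost above and the update rule below):

$$ \frac{\partial \tilde{E}}{\partial w_i} = \frac{\partial E}{\partial w_i} + \lambda w_i. $$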

Applying gradient descent to this new cost function we obtain:

$$ w_i \leftarrow w_i - \eta \frac{\partial E}{\partial w_i} - \eta \lambda w_i. $$

The new term $-\eta \lambda w_i$ coming from the regularization causes the weight to decay in proportion to its size.
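As a concrete illustration, here is a minimal NumPy sketch of one gradient-descent step with weight decay. This is not Caffe code; the weights, gradient, and hyperparameter values are made up for the example:

```python
import numpy as np

# Toy values, chosen only for illustration.
w = np.array([0.5, -1.2, 2.0])        # current weights w_i
grad_E = np.array([0.2, -0.1, 0.4])   # dE/dw_i at the current weights (placeholder)
eta = 0.1    # learning rate (base_lr in Caffe's solver)
lam = 0.01   # weight decay strength (weight_decay in Caffe's solver)

# One update step: w_i <- w_i - eta * dE/dw_i - eta * lam * w_i
w = w - eta * grad_E - eta * lam * w

# Each weight is additionally pulled toward zero, in proportion to its size.
print(w)
```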

In your solver you set a base learning rate (base_lr) as well as a global weight decay (weight_decay). lr_mult indicates what to multiply the base learning rate by for the parameters of a particular layer. This is useful if you want to update some layers with a smaller learning rate (e.g. when finetuning some layers while training others from scratch), or if you do not want to update the weights of one layer at all (perhaps you keep all the conv layers fixed and just retrain the fully connected layers). decay_mult works the same way, but for the weight decay.
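To make the combination concrete, the short sketch below computes the effective learning rate and weight decay for each parameter blob of a single layer. The solver settings and the multipliers (a finetuning-style choice of lr_mult 1/2 and decay_mult 1/0 for weights/bias) are illustrative values, not taken from any particular model:

```python
# Solver-level settings (as in solver.prototxt); example values only.
base_lr = 0.01
weight_decay = 0.0005

# Per-blob multipliers for one layer, as given by the lr_mult / decay_mult
# entries in its param blocks.
layer_params = [
    {"blob": "weights", "lr_mult": 1.0, "decay_mult": 1.0},
    {"blob": "bias",    "lr_mult": 2.0, "decay_mult": 0.0},
]

for p in layer_params:
    effective_lr = base_lr * p["lr_mult"]             # learning rate used for this blob
    effective_decay = weight_decay * p["decay_mult"]  # weight decay used for this blob
    print(p["blob"], effective_lr, effective_decay)
```

Setting lr_mult to 0 for a layer's blobs freezes them entirely, which is the usual way to keep pretrained layers fixed while finetuning only the new layers.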

 

References:

http://stats.stackexchange.com/questions/29130/difference-between-neural-net-weight-decay-and-learning-rate

http://blog.csdn.net/u010025211/article/details/50055815

https://groups.google.com/forum/#!topic/caffe-users/8J_J8tc1ZHc

Original post: http://www.cnblogs.com/malf-14/p/5540514.html
