
Deep Learning by Andrew Ng --- DNN

Published: 2015-04-09 09:00:30

Tags: training, andrew

When should we use fine-tuning?

It is typically used only if you have a large labeled training set; in this setting, fine-tuning can significantly improve the performance of your classifier. However, if you have a large unlabeled dataset (for unsupervised feature learning/pre-training) and only a relatively small labeled training set, then fine-tuning is significantly less likely to help.

Stacked Autoencoders (Training):

This amounts to using multiple autoencoders to capture features of the input set at successively higher levels. The first autoencoder is trained on the raw input and yields feature matrix 1 (the hidden layer's weights). The input set is then fed forward through feature matrix 1, and the resulting activations serve as the input for training the next autoencoder, which captures higher-level features in feature matrix 2 (its hidden layer's weights). This process repeats, and the activations produced by the final autoencoder are used as the input for training a softmax classifier (or some other classifier). (Note that the trained feature matrix itself is not passed directly to the next autoencoder; rather, the activations obtained by feeding the current input forward through that feature matrix — i.e., the layer's output — are what get passed to the next autoencoder.)
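The greedy layer-wise procedure above can be sketched as follows. This is a minimal NumPy illustration (my own, not code from the original course): each autoencoder is a one-hidden-layer network trained by plain gradient descent on reconstruction error, and — the key point — each subsequent autoencoder is trained on the previous layer's *activations*, not on its weight matrix.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=300, seed=0):
    """Train a one-hidden-layer autoencoder with plain gradient descent.
    Returns the hidden layer's weights and bias (the 'feature matrix')."""
    rng = np.random.default_rng(seed)
    m, n_in = X.shape
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        a1 = sigmoid(X @ W1 + b1)            # hidden activation
        out = sigmoid(a1 @ W2 + b2)          # reconstruction of X
        d2 = (out - X) * out * (1 - out)     # squared-error output delta
        d1 = (d2 @ W2.T) * a1 * (1 - a1)     # hidden delta
        W2 -= lr * (a1.T @ d2) / m; b2 -= lr * d2.mean(axis=0)
        W1 -= lr * (X.T @ d1) / m;  b1 -= lr * d1.mean(axis=0)
    return W1, b1

# Greedy layer-wise training on a toy unlabeled input set: each
# autoencoder receives the previous layer's ACTIVATIONS as its input.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (100, 20))
layers, acts = [], X
for n_hidden in (16, 8):                     # two stacked autoencoders
    W, b = train_autoencoder(acts, n_hidden)
    layers.append((W, b))
    acts = sigmoid(acts @ W + b)             # feed-forward output -> next input
# `acts` (100 x 8) would now be the input for training a softmax classifier.
```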
Once the whole network has been trained, the feature matrices obtained at each step are combined with the classifier's parameters to form a single new network.
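That final assembly can be sketched like this (a hedged illustration; the weights below are random stand-ins, whereas in practice they would come from the layer-wise pretraining and softmax training described above): the feature matrices are applied in sequence, and the softmax classifier sits on top as the output layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=1, keepdims=True)

def stacked_forward(x, layers, W_cls, b_cls):
    """Run input through every pretrained feature matrix, then the
    softmax classifier, as one combined network."""
    for W, b in layers:
        x = sigmoid(x @ W + b)
    return softmax(x @ W_cls + b_cls)

# Random stand-ins for the trained parameters (20 -> 16 -> 8 -> 4 classes).
rng = np.random.default_rng(0)
layers = [(rng.normal(0, 0.1, (20, 16)), np.zeros(16)),
          (rng.normal(0, 0.1, (16, 8)), np.zeros(8))]
W_cls, b_cls = rng.normal(0, 0.1, (8, 4)), np.zeros(4)

probs = stacked_forward(rng.uniform(size=(5, 20)), layers, W_cls, b_cls)
# Each row of `probs` is a class distribution over the 4 classes.
```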

fine-tuning:

(To be continued)
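The original post stops here, but a hedged sketch of the idea (my own illustration, not the course's code): fine-tuning treats the pretrained stack plus the softmax classifier as one network and backpropagates the supervised loss through *every* layer, so the pretrained feature matrices are adjusted jointly with the classifier — which is why it needs a reasonably large labeled set to help.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fine_tune_step(X, Y, layers, W_cls, b_cls, lr=0.5):
    """One supervised backprop step through the WHOLE stack: the
    pretrained sigmoid layers and the softmax head update together."""
    m = len(X)
    acts = [X]
    for W, b in layers:                      # forward pass
        acts.append(sigmoid(acts[-1] @ W + b))
    probs = softmax(acts[-1] @ W_cls + b_cls)
    loss = -np.mean(np.sum(Y * np.log(probs + 1e-12), axis=1))
    delta = (probs - Y) / m                  # softmax cross-entropy gradient
    W_cls_new = W_cls - lr * acts[-1].T @ delta
    b_cls_new = b_cls - lr * delta.sum(axis=0)
    d = delta @ W_cls.T                      # backprop into the stack
    new_layers = [None] * len(layers)
    for i in range(len(layers) - 1, -1, -1):
        W, b = layers[i]
        dz = d * acts[i + 1] * (1 - acts[i + 1])
        d = dz @ W.T
        new_layers[i] = (W - lr * acts[i].T @ dz, b - lr * dz.sum(axis=0))
    return new_layers, W_cls_new, b_cls_new, loss

# Toy labeled data: fine-tuning assumes labels are available at this stage.
rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 20))
Y = np.eye(4)[rng.integers(0, 4, 50)]        # one-hot labels
layers = [(rng.normal(0, 0.1, (20, 16)), np.zeros(16)),
          (rng.normal(0, 0.1, (16, 8)), np.zeros(8))]
W_cls, b_cls = rng.normal(0, 0.1, (8, 4)), np.zeros(4)

losses = []
for _ in range(50):
    layers, W_cls, b_cls, loss = fine_tune_step(X, Y, layers, W_cls, b_cls)
    losses.append(loss)
```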


Original post: http://blog.csdn.net/meanme/article/details/44945661
