Fine-tuning is typically used only if you have a large labeled training set; in this setting, fine-tuning can significantly improve the performance of your classifier. However, if you have a large unlabeled dataset (for unsupervised feature learning/pre-training) and only a relatively small labeled training set, then fine-tuning is significantly less likely to help.
This amounts to using multiple autoencoders to capture the features of the input set. After the first autoencoder has captured the dataset's features, we obtain feature matrix 1 (the hidden layer's weights). The input set is then fed forward through feature matrix 1, and the resulting activations are used as the input for learning higher-level features, giving feature matrix 2 (the hidden layer's weights of the second autoencoder). This is repeated layer by layer, and the final feature activations are fed as the input set into a softmax classifier (or any other classifier) for training. (Note that the feature matrix obtained from training is not passed directly to the next autoencoder; rather, the activations obtained by feeding the input set forward through its corresponding feature matrix are passed on, i.e., the output of each autoencoder becomes the input of the next one, as sketched below.)
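A minimal NumPy sketch of this greedy layer-wise data flow is given below. The layer sizes, learning rate, and helper names (`train_autoencoder`, `feed_forward`) are illustrative assumptions, not from the original post (which follows the UFLDL MATLAB exercises); the point is only that each autoencoder is trained on the previous layer's activations, and only the encoder weights are kept.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """Train one sigmoid autoencoder on X (n_samples x n_features) with
    plain batch gradient descent on squared reconstruction error.
    Returns only the encoder parameters (the "feature matrix" W1 and b1)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_features, n_hidden))   # encoder weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_features))   # decoder weights (discarded)
    b2 = np.zeros(n_features)
    m = X.shape[0]
    for _ in range(epochs):
        # forward pass
        A1 = sigmoid(X @ W1 + b1)            # hidden activations
        A2 = sigmoid(A1 @ W2 + b2)           # reconstruction of the input
        # backward pass for squared-error loss
        d2 = (A2 - X) * A2 * (1 - A2)
        d1 = (d2 @ W2.T) * A1 * (1 - A1)
        W2 -= lr * A1.T @ d2 / m;  b2 -= lr * d2.mean(axis=0)
        W1 -= lr * X.T @ d1 / m;   b1 -= lr * d1.mean(axis=0)
    return W1, b1

def feed_forward(X, W, b):
    """Map inputs through a trained encoder; these activations, not the
    weight matrix itself, are what gets passed to the next autoencoder."""
    return sigmoid(X @ W + b)

# Greedy layer-wise pretraining (hypothetical sizes: 64 -> 32 -> 16).
X = np.random.rand(500, 64)                  # unlabeled input set
layers, A = [], X
for n_hidden in (32, 16):                    # two stacked autoencoders
    W, b = train_autoencoder(A, n_hidden)
    layers.append((W, b))
    A = feed_forward(A, W, b)                # activations become the next input set
# A now holds the top-level features used to train the softmax (or other) classifier.
```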
Then, once the whole network has been trained, the feature matrices obtained at each step are combined with the classifier's parameters to form a single new network.
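Continuing the sketch above (and reusing its `feed_forward`, `layers`, and NumPy import), the combined network is just the pretrained encoder weights chained together and topped with the softmax classifier's parameters; fine-tuning would then update all of these jointly by backpropagation. The softmax parameters `W_soft`, `b_soft` here are assumed to come from the classifier training step.

```python
def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)     # numerical stability
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)

def stacked_net_predict(X, layers, W_soft, b_soft):
    """Forward pass of the assembled network: each pretrained encoder
    (feature matrix) in turn, then the softmax layer on top."""
    A = X
    for W, b in layers:                      # encoders from the pretraining sketch
        A = feed_forward(A, W, b)
    return softmax(A @ W_soft + b_soft)      # class probabilities
```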
(To be continued)
Deep Learning by Andrew Ng --- DNN
Original article: http://blog.csdn.net/meanme/article/details/44945661