
[DeepLearning] Exercise: Learning color features with Sparse Autoencoders

Posted: 2015-01-12


Exercise link: Exercise: Learning color features with Sparse Autoencoders
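The only change relative to the earlier sparse autoencoder exercise is the output layer: it is linear (a3 = z3 rather than sigmoid(z3)), so the network can reconstruct ZCA-whitened 8x8 RGB patches whose values are not confined to [0,1]. As a rough sketch of how the starter script invokes the cost function (parameter values quoted from memory; check linearDecoderExercise.m for the authoritative settings):

imageChannels = 3;                                    % RGB patches
patchDim      = 8;
visibleSize   = patchDim * patchDim * imageChannels;  % 192
hiddenSize    = 400;
sparsityParam = 0.035;   % desired average activation rho
lambda        = 3e-3;    % weight decay
beta          = 5;       % weight of the sparsity penalty

% theta: unrolled initial parameters; patches: ZCA-whitened color patches
% provided by the starter code
[cost, grad] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
    lambda, sparsityParam, beta, patches);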

 

sparseAutoencoderLinearCost.m

function [cost,grad] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
    lambda, sparsityParam, beta, data)

% visibleSize: the number of input units (192 = 8*8*3 for this exercise)
% hiddenSize: the number of hidden units (400 for this exercise)
% lambda: weight decay parameter
% sparsityParam: the desired average activation for the hidden units (denoted in the
%                lecture notes by the Greek letter rho, which looks like a lower-case "p")
% beta: weight of the sparsity penalty term
% data: our visibleSize x m matrix of training patches, so data(:,i) is the i-th training example

% The input theta is a vector (because minFunc expects the parameters to be a vector).
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.

% W1(i,j) is the weight from the j-th node in the input layer to the i-th node
% in the hidden layer, so W1 is a hiddenSize*visibleSize matrix
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
% W2(i,j) is the weight from the j-th node in the hidden layer to the i-th node
% in the output layer, so W2 is a visibleSize*hiddenSize matrix
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
% b1(i) is the bias into the i-th node of the hidden layer, so b1 is a hiddenSize*1 vector
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
% b2(i) is the bias into the i-th node of the output layer, so b2 is a visibleSize*1 vector
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
%                and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
% W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
% Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
% as b1, etc.  Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
% respect to W1.  I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b)
% with respect to the input parameter W1(i,j).  Thus, W1grad should be equal to the term
% [(1/m) \Delta W^{(1)} + \lambda W^{(1)}] in the last block of pseudo-code in Section 2.2
% of the lecture notes (and similarly for W2grad, b1grad, b2grad).
%
% Stated differently, if we were using batch gradient descent to optimize the parameters,
% the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2.
%

% 1. Set \Delta W^{(1)}, \Delta b^{(1)} to 0 for all layer l
% Cost and gradient variables (your code needs to compute these values).
% Here, we initialize them to zeros.
W1grad = zeros(size(W1));
W2grad = zeros(size(W2));
b1grad = zeros(size(b1));
b2grad = zeros(size(b2));

m = size(data,2);
% Since the data set fits in memory, compute the activations in one vectorized
% feedforward pass, use them for rho, and reuse them in backpropagation
% 2a. Use backpropagation to compute diff(J_sparse(W,b;x,y), W^{(1)})
% and diff(J_sparse(W,b;x,y), b^{(1)})

% 2a.1. Perform a feedforward pass, computing the activations for
% hidden layer and output layer.

% z2 is a hiddenSize*m matrix
z2 = W1*data + repmat(b1,1,m);
% a2 is a hiddenSize*m matrix
a2 = sigmoid(z2);
% z3 is a visibleSize*m matrix
z3 = W2*a2 + repmat(b2,1,m);
% a3 is a visibleSize*m matrix
a3 = z3;
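% Note: this is the linear decoder. The output activation is the identity,
% a3 = z3, rather than sigmoid(z3), so reconstructions can take values
% outside [0,1] (needed for ZCA-whitened color patches).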
% rho is a hiddenSize*1 vector
rho = sum(a2,2);
rho = rho ./ m;

% KLterm is a hiddenSize*1 vector
KLterm = beta*(-sparsityParam ./ rho + (1-sparsityParam) ./ (1-rho));
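% (KLterm is beta times the derivative of the KL sparsity penalty with
% respect to rho; it is folded into delta2 below so that backpropagation
% also accounts for the sparsity term.)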

% Accumulate the cost
cost = 1/2 * sum(sum((data-a3).*(data-a3)));

% 2a.2. For the output layer, set delta3
% delta3 is a visibleSize*m matrix
delta3 = -(data-a3);
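% With a linear decoder f(z3) = z3, so f'(z3) = 1 and no sigmoidDiff(z3)
% factor appears here, unlike in the standard sparse autoencoder.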

% 2a.3. For the hidden layer, set delta2
% delta2 is a hiddenSize*m matrix
delta2 = (W2'*delta3 + repmat(KLterm,1,m)) .* sigmoidDiff(z2);

% 2a.4. Compute the desired partial derivatives

% JW1diff is a hiddenSize*visibleSize matrix
JW1diff = delta2 * data';
% Jb1diff is a hiddenSize*m matrix
Jb1diff = delta2;
% JW2diff is a visibleSize*hiddenSize matrix
JW2diff = delta3 * a2';
% Jb2diff is a visibleSize*m matrix
Jb2diff = delta3;

% 2b. Update \Delta W^{(1)}
W1grad = W1grad + JW1diff;
W2grad = W2grad + JW2diff;

% 2c. Update \Delta b^{(1)}
b1grad = b1grad + sum(Jb1diff,2);
b2grad = b2grad + sum(Jb2diff,2);

% Compute KL penalty term
KLpen = beta * sum(sparsityParam*log(sparsityParam ./ rho) + (1-sparsityParam)*log((1-sparsityParam) ./ (1-rho)));

% Compute weight decay term
tempW1 = W1 .* W1;
tempW2 = W2 .* W2;
WD = (lambda/2)*(sum(sum(tempW1))+sum(sum(tempW2)));

cost = cost ./ m + WD + KLpen;
W1grad = W1grad ./ m + lambda .* W1;
W2grad = W2grad ./ m + lambda .* W2;
b1grad = b1grad ./ m;
b2grad = b2grad ./ m;

%-------------------------------------------------------------------
% 3. After computing the cost and gradient, convert the gradients back to
% a vector format (suitable for minFunc) by unrolling the gradient matrices
% into a single vector.

grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. It takes a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).

function sigm = sigmoid(x)
sigm = 1 ./ (1 + exp(-x));
end

% Derivative of the sigmoid function
function sigmDiff = sigmoidDiff(x)
sigmDiff = sigmoid(x) .* (1-sigmoid(x));
end
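Before training on the full patch set, it is worth verifying the analytic gradient against a numerical one. A minimal sketch of such a check, assuming the initializeParameters.m and computeNumericalGradient.m helpers from the earlier sparse autoencoder exercises are on the path (the tiny sizes below are debug values, not the real 192/400 configuration):

% Gradient check on a small random problem
debugVisibleSize = 8;
debugHiddenSize  = 5;
patches = rand(debugVisibleSize, 10);                 % random debug data
theta   = initializeParameters(debugHiddenSize, debugVisibleSize);

[cost, grad] = sparseAutoencoderLinearCost(theta, debugVisibleSize, ...
    debugHiddenSize, 3e-3, 0.035, 5, patches);

numGrad = computeNumericalGradient(@(x) sparseAutoencoderLinearCost(x, ...
    debugVisibleSize, debugHiddenSize, 3e-3, 0.035, 5, patches), theta);

% This ratio should be very small (around 1e-9) if the gradients agree
disp(norm(numGrad - grad) / norm(numGrad + grad));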

 

Result:

[Figure: learned color features from the trained sparse autoencoder]

 

If your output looks like the figure below instead, you probably wrote a3 = sigmoid(z3) instead of a3 = z3:

[Figure: degraded features produced when the output layer uses a sigmoid instead of a linear activation]
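For reference, a minimal sketch of the two output-layer variants; only the linear-decoder lines belong in this exercise:

% Linear decoder (correct here):
a3 = z3;
delta3 = -(data - a3);

% Sigmoid output layer (produces the degraded result above, because the
% ZCA-whitened patches are not confined to [0,1]):
% a3 = sigmoid(z3);
% delta3 = -(data - a3) .* sigmoidDiff(z3);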



Original post: http://www.cnblogs.com/ganganloveu/p/4218111.html
