
A Few Difficulties of RNNs (Recurrent Neural Networks)


1. The vanishing gradient

The gradient of the RNN's error at some time step \(t\) with respect to \(W\) is:

\(\frac{\partial E_t}{\partial W}=\sum_{k=1}^{t}\frac{\partial E_t}{\partial y_t}\frac{\partial y_t}{\partial h_t}\frac{\partial h_t}{\partial h_k}\frac{\partial h_k}{\partial W}\),

where \(h\) is the output of the hidden nodes, \(y_t\) is the network's output at time \(t\), and \(W\) is the hidden-to-hidden weight matrix. The factor \(\frac{\partial h_t}{\partial h_k}\) is the chain-rule expansion of the derivative over the interval \([k,t]\); this interval can be very long, and that is what causes the gradient to vanish or explode. Expanding \(\frac{\partial h_t}{\partial h_k}\) over time:

\(\frac{\partial h_t}{\partial h_k}=\prod_{j=k+1}^{t}\frac{\partial h_j}{\partial h_{j-1}}=\prod_{j=k+1}^{t}W^T \times diag \left[\frac{\partial\sigma(h_{j-1})}{\partial h_{j-1}}\right]\)

So what on earth is this diag matrix? Let me give an example and it will become clear. Suppose we want to compute \(\frac{\partial h_5}{\partial h_4}\). Recall how \(h_5\) was obtained during forward propagation: \(h_5=W\sigma(h_4)+W^{hx}x_5\), so \(\frac{\partial h_5}{\partial h_4}=W\frac{\partial \sigma(h_4)}{\partial h_4}\) (whether \(W\) or its transpose appears depends on the layout convention chosen for the Jacobian; the lecture notes cited below use the transposed form). Note that \(\sigma(h_4)\) and \(h_4\) are both vectors, so \(\frac{\partial \sigma(h_4)}{\partial h_4}\) is a Jacobian matrix:

\(\frac{\partial \sigma(h_4)}{\partial h_4}=\begin{bmatrix} \frac{\partial\sigma_1(h_{41})}{\partial h_{41}}&\cdots&\frac{\partial\sigma_1(h_{41})}{\partial h_{4D}} \\ \vdots&\ddots&\vdots \\ \frac{\partial\sigma_D(h_{4D})}{\partial h_{41}}&\cdots&\frac{\partial\sigma_D(h_{4D})}{\partial h_{4D}}\end{bmatrix}\)

Clearly, every off-diagonal entry is 0. This is because the sigmoid logistic function \(\sigma\) is applied element-wise.
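To make the diagonal structure concrete, here is a minimal NumPy sketch (my own illustration, with an assumed hidden size \(D=4\) and random stand-in weights, not values from the post) that builds \(\frac{\partial h_5}{\partial h_4}=W\,diag[\sigma'(h_4)]\) and checks it against a finite-difference approximation:

```python
import numpy as np

np.random.seed(0)
D = 4                               # assumed hidden size, for illustration only
W = np.random.randn(D, D) * 0.1     # hidden-to-hidden weights (random stand-in)
h4 = np.random.randn(D)             # hidden state at t = 4

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# sigma acts element-wise, so its Jacobian is diagonal:
# sigma'(x) = sigma(x) * (1 - sigma(x))
jac_sigma = np.diag(sigmoid(h4) * (1.0 - sigmoid(h4)))

# From h5 = W sigma(h4) + W^{hx} x5, the Jacobian is dh5/dh4 = W @ diag(sigma'(h4))
dh5_dh4 = W @ jac_sigma

# Verify against a finite-difference approximation, column by column
eps = 1e-6
fd = np.zeros((D, D))
for j in range(D):
    e = np.zeros(D)
    e[j] = eps
    fd[:, j] = (W @ sigmoid(h4 + e) - W @ sigmoid(h4)) / eps
print(np.allclose(dh5_dh4, fd, atol=1e-5))  # True
```

Because \(\sigma\) is element-wise, the np.diag(...) term really is the entire Jacobian of the activation; only \(W\) mixes components across dimensions.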

From here, the derivation of the vanishing/exploding gradient is straightforward, so I won't write it out; see equation (14) and onward in http://cs224d.stanford.edu/lecture_notes/LectureNotes4.pdf.
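As a rough numerical illustration of that derivation (my own sketch, not from the notes): the product \(\prod_{j}W^T diag[\sigma'(h_{j-1})]\) behaves geometrically in the interval length, so its norm collapses toward zero when \(W\) is small and blows up when \(W\) is large. For simplicity \(h\) is held fixed near zero here, so \(\sigma'(h)\) stays near its maximum of 0.25 and the behavior is governed purely by the scale of \(W\):

```python
import numpy as np

np.random.seed(1)
D, steps = 10, 50                   # assumed hidden size and interval length t - k
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

h = 0.1 * np.random.randn(D)        # held fixed near zero, so sigma'(h) ~ 0.25
dsig = np.diag(sigmoid(h) * (1.0 - sigmoid(h)))   # diagonal Jacobian of sigma

for scale in (1.0, 8.0):            # small vs. large hidden-to-hidden weights
    W = scale * np.random.randn(D, D) / np.sqrt(D)
    J = W.T @ dsig                  # one factor of the product over [k, t]
    prod = np.eye(D)
    for _ in range(steps):
        prod = prod @ J             # accumulate the chain over the interval
    print(f"scale={scale}: ||dh_t/dh_k||_2 = {np.linalg.norm(prod, 2):.3e}")
```

With scale=1.0 the norm ends up many orders of magnitude below 1 after 50 steps (a vanishing gradient); with scale=8.0 it grows many orders of magnitude above 1 (an exploding gradient) -- the phenomenon from the derivation in miniature.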

2. Summing the derivatives of nodes

To be continued...

Original post: http://www.cnblogs.com/congliu/p/4546634.html
