1、Introduction
Solving the VO problem with deep learning: end-to-end VO with a Recurrent Convolutional Neural Network (RCNN).
2、Network structure
a. CNN based Feature Extraction
The paper uses the KITTI dataset.
The CNN part has 9 convolutional layers; except for Conv6, each convolutional layer is followed by a ReLU layer, giving 17 layers in total.
b. RNN based Sequential Modelling
RNN is different from CNN in that it maintains memory of its hidden states over time and has feedback loops among them, which enables its current hidden state to be a function of the previous ones.
Given a convolutional feature x_k at time k, an RNN updates at time step k by

h_k = H(W_xh · x_k + W_hh · h_(k-1) + b_h)
y_k = W_hy · h_k + b_y
h_k and y_k are the hidden state and output at time k, respectively.
W terms denote corresponding weight matrices.
b terms denote bias vectors.
H is an element-wise nonlinear activation function.
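As a minimal sketch of the vanilla RNN update described above (with made-up dimensions, random weights, and tanh chosen as the activation H; the function name `rnn_step` is not from the paper):

```python
import numpy as np

def rnn_step(x_k, h_prev, W_xh, W_hh, W_hy, b_h, b_y):
    """One vanilla RNN update: hidden state and output at time k."""
    h_k = np.tanh(W_xh @ x_k + W_hh @ h_prev + b_h)  # H chosen as tanh here
    y_k = W_hy @ h_k + b_y
    return h_k, y_k

# Toy sizes (illustrative only): feature dim 4, hidden dim 3, output dim 2
rng = np.random.default_rng(0)
W_xh, W_hh = rng.standard_normal((3, 4)), rng.standard_normal((3, 3))
W_hy = rng.standard_normal((2, 3))
b_h, b_y = np.zeros(3), np.zeros(2)

h = np.zeros(3)                            # initial hidden state
for x in rng.standard_normal((5, 4)):      # a short sequence of 5 features
    h, y = rnn_step(x, h, W_xh, W_hh, W_hy, b_h, b_y)
print(h.shape, y.shape)
```

Because h_k feeds back into the next step, the hidden state carries information about the entire sequence seen so far, which is exactly the property the paper exploits for sequential pose estimation.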
LSTM
(Figure: folded and unfolded LSTMs and the internal structure of an LSTM unit.)
The LSTM updates at time step k by

i_k = σ(W_xi · x_k + W_hi · h_(k-1) + b_i)
f_k = σ(W_xf · x_k + W_hf · h_(k-1) + b_f)
g_k = tanh(W_xg · x_k + W_hg · h_(k-1) + b_g)
c_k = f_k ⊙ c_(k-1) + i_k ⊙ g_k
o_k = σ(W_xo · x_k + W_ho · h_(k-1) + b_o)
h_k = o_k ⊙ tanh(c_k)

⊙ is the element-wise product of two vectors.
σ is the sigmoid non-linearity.
tanh is the hyperbolic tangent non-linearity.
W terms denote the corresponding weight matrices.
b terms denote the bias vectors.
i_k, f_k, g_k, c_k and o_k are the input gate, forget gate, input modulation gate, memory cell and output gate, respectively.
Each of the LSTM layers has 1000 hidden states.
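A single LSTM step with the gates above can be sketched in NumPy as follows (toy dimensions, random weights; the helper `lstm_step` and the stacked-gate weight layout are illustrative choices, not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_k, h_prev, c_prev, W, b):
    """One LSTM update. W maps [x_k; h_prev] to the four pre-activations,
    stacked as (input i, forget f, modulation g, output o)."""
    z = W @ np.concatenate([x_k, h_prev]) + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c_k = f * c_prev + i * g       # memory cell: forget old, write new
    h_k = o * np.tanh(c_k)         # hidden state gated by the output gate
    return h_k, c_k

# Toy sizes (the paper uses 1000 hidden states per LSTM layer)
x_dim, h_dim = 6, 4
rng = np.random.default_rng(1)
W = rng.standard_normal((4 * h_dim, x_dim + h_dim))
b = np.zeros(4 * h_dim)
h, c = np.zeros(h_dim), np.zeros(h_dim)
h, c = lstm_step(rng.standard_normal(x_dim), h, c, W, b)
print(h.shape, c.shape)
```

The additive update of c_k is what lets gradients flow over long sequences, which motivates using LSTMs rather than vanilla RNNs here.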
3、Loss Function and Optimisation
The RCNN models the conditional probability of the poses Y_t = (y_1, . . . , y_t) given a sequence of monocular RGB images X_t = (x_1, . . . , x_t) up to time t:

p(Y_t | X_t) = p(y_1, . . . , y_t | x_1, . . . , x_t)
Optimal parameters:

θ* = argmax_θ p(Y_t | X_t; θ)
To learn the hyperparameters θ of the DNNs, the Euclidean distance between the ground truth pose and the estimated one is minimised. The loss is the MSE of all positions and orientations:

θ* = argmin_θ (1/N) Σ_{i=1}^{N} Σ_{k=1}^{t} ( ||p̂_k − p_k||₂² + κ ||φ̂_k − φ_k||₂² )
(p_k, φ_k) is the ground truth pose.
(p̂_k, φ̂_k) is the estimated pose.
κ (100 in the experiments) is a scale factor to balance the weights of positions and orientations.
N is the number of samples.
The orientation φ is represented by Euler angles rather than quaternion since quaternion is subject to an extra unit constraint which hinders the optimisation problem of DL.
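The loss above can be sketched directly in NumPy (the function name `deepvo_loss` and the array shapes are illustrative assumptions, not the paper's code):

```python
import numpy as np

def deepvo_loss(p_hat, p, phi_hat, phi, kappa=100.0):
    """MSE loss over N trajectories of t poses each: squared position
    error plus kappa-weighted squared orientation (Euler angle) error,
    summed over time and averaged over samples. All shapes: (N, t, 3)."""
    pos_err = np.sum(np.linalg.norm(p_hat - p, axis=-1) ** 2, axis=1)
    ori_err = np.sum(np.linalg.norm(phi_hat - phi, axis=-1) ** 2, axis=1)
    return np.mean(pos_err + kappa * ori_err)

# Toy data: N=2 trajectories of t=3 poses
rng = np.random.default_rng(2)
p, phi = rng.standard_normal((2, 3, 3)), rng.standard_normal((2, 3, 3))
loss = deepvo_loss(p + 0.1, p, phi, phi)  # only position error contributes
print(float(loss))
```

With κ = 100, a small orientation error is penalised as heavily as a much larger position error, which is why the scale factor is needed to balance the two terms.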
DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks
Original post: https://www.cnblogs.com/zhuzhudong/p/9591349.html