use of unsqueeze():
module = nn.Unsqueeze(pos [, numInputDims])  -- Torch (Lua) nn module form; pos is where the size-1 dimension is inserted
input = torch.Tensor(2, 4, 3)         # input: 2*4*3
print(input.unsqueeze(0).size())      # prints torch.Size([1, 2, 4, 3])
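As a further illustration (a minimal sketch, not part of the original post), unsqueeze can insert the size-1 dimension at any position, including negative indices:

import torch

input = torch.rand(2, 4, 3)
print(input.unsqueeze(1).size())   # torch.Size([2, 1, 4, 3])
print(input.unsqueeze(-1).size())  # torch.Size([2, 4, 3, 1])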
use of view():
input = torch.Tensor(2, 4, 3)         # input: 2*4*3
print(input.view(1, 2, 4, 3).size())  # prints torch.Size([1, 2, 4, 3]); view allows at most one -1 (inferred) dimension
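To avoid hard-coding every dimension (a sketch, assuming the tensor is contiguous), view can infer a single dimension with -1, and view(1, *input.size()) reproduces the effect of unsqueeze(0):

import torch

input = torch.rand(2, 4, 3)
print(input.view(1, -1, 4, 3).size())       # torch.Size([1, 2, 4, 3]); only one -1 is allowed
print(input.view(1, *input.size()).size())  # torch.Size([1, 2, 4, 3]), same as unsqueeze(0)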
squeeze: if a dimension is specified, only that dimension is squeezed (when its size is 1); otherwise all dimensions of size 1 are removed.
e.g. B = squeeze(A): B has the same elements as A, but every singleton dimension (a dimension of size 1) has been removed.
module = nn.Squeeze([dim, numInputDims])

x = torch.rand(2,1,2,1,2)
> x
(1,1,1,.,.) =
  0.6020  0.8897

(2,1,1,.,.) =
  0.4713  0.2645

(1,1,2,.,.) =
  0.4441  0.9792

(2,1,2,.,.) =
  0.5467  0.8648

> torch.squeeze(x,2)
(1,1,.,.) =
  0.6020  0.8897

(2,1,.,.) =
  0.4713  0.2645

(1,2,.,.) =
  0.4441  0.9792

(2,2,.,.) =
  0.5467  0.8648
[torch.DoubleTensor of dimension 2x2x1x2]
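For readers on PyTorch rather than Lua Torch, a rough equivalent of the example above (a sketch; the random values will differ, and PyTorch dimensions are 0-indexed) is:

import torch

x = torch.rand(2, 1, 2, 1, 2)
print(torch.squeeze(x, 1).size())  # torch.Size([2, 2, 1, 2]); squeezes only dim 1
print(torch.squeeze(x).size())     # torch.Size([2, 2, 2]); removes every size-1 dimension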
dropout
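Since this section is still a stub, the following is only a minimal sketch of typical torch.nn.Dropout usage (not from the original post): during training each element is zeroed with probability p and the surviving entries are scaled by 1/(1-p); in eval mode dropout does nothing.

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)   # zero each element with probability 0.5 during training
x = torch.ones(2, 4)
print(drop(x))             # surviving entries are scaled to 2.0
drop.eval()
print(drop(x))             # identity in eval mode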
To be continued.
Original post: http://www.cnblogs.com/Joyce-song94/p/7002984.html