
TensorFlow Normalization

2018-05-05


local_response_normalization

local_response_normalization first appeared in the paper "ImageNet Classification with Deep Convolutional Neural Networks", which reports that this kind of normalization helps generalization.

The normalization formula from the paper is

    b[i, x, y] = a[i, x, y] / ( k + α · Σ_{j = max(0, i−n/2)}^{min(N−1, i+n/2)} a[j, x, y]² )^β

where a[i, x, y] is the activity of channel i at spatial position (x, y), N is the total number of channels, and k, n, α, β are hyperparameters. In the TensorFlow op below, k corresponds to bias, n/2 to depth_radius, and α and β to alpha and beta.


After a conv2d or pooling op, we obtain a tensor of shape [batch_size, height, width, channels]. From here on, think of the channels as the "layers" the normalization runs over, and ignore batch_size.


tf.nn.local_response_normalization(input, depth_radius=5, bias=1, alpha=1, beta=0.5, name=None)
"""
Local Response Normalization.
The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius.

input: A Tensor. Must be one of the following types: float32, half. 4-D.
depth_radius: An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window.
bias: An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0).
alpha: An optional float. Defaults to 1. A scale factor, usually positive.
beta: An optional float. Defaults to 0.5. An exponent.
name: A name for the operation (optional).
"""

In detail, the op computes

    sqr_sum[a, b, c, d] = sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
    output = input / (bias + alpha * sqr_sum) ** beta
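To make the windowed sum concrete, here is a minimal NumPy sketch of the same computation (my own illustration, not TensorFlow's actual kernel); note how the window is clamped at both ends of the channel axis:

import numpy as np

def lrn(x, depth_radius=5, bias=1.0, alpha=1.0, beta=0.5):
    # Naive LRN over the last axis of a [batch, height, width, channels] array.
    out = np.empty_like(x)
    channels = x.shape[-1]
    for d in range(channels):
        # The window [d - depth_radius, d + depth_radius] is clamped at the channel edges.
        lo = max(0, d - depth_radius)
        hi = min(channels, d + depth_radius + 1)
        sqr_sum = np.sum(x[..., lo:hi] ** 2, axis=-1)
        out[..., d] = x[..., d] / (bias + alpha * sqr_sum) ** beta
    return out

With depth_radius=2, bias=0, alpha=1, beta=1 this reproduces the normalized feature map in the example below.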

An example:

import tensorflow as tf

a = tf.constant([
    [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0],
     [8.0, 7.0, 6.0, 5.0],
     [4.0, 3.0, 2.0, 1.0]],
    [[4.0, 3.0, 2.0, 1.0],
     [8.0, 7.0, 6.0, 5.0],
     [1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]
])
# Reshape a to get a feature map of shape [batch: 1, height: 2, width: 2, channels: 8].
a = tf.reshape(a, [1, 2, 2, 8])

# depth_radius=2, bias=0, alpha=1, beta=1
normal_a = tf.nn.local_response_normalization(a, 2, 0, 1, 1)
with tf.Session() as sess:
    print("feature map:")
    image = sess.run(a)
    print(image)
    print("normalized feature map:")
    normal = sess.run(normal_a)
    print(normal)
feature map:  
[[[[ 1.  2.  3.  4.  5.  6.  7.  8.]  
   [ 8.  7.  6.  5.  4.  3.  2.  1.]]  
  
  [[ 4.  3.  2.  1.  8.  7.  6.  5.]  
   [ 1.  2.  3.  4.  5.  6.  7.  8.]]]]  
normalized feature map:  
[[[[ 0.07142857  0.06666667  0.05454545  0.04444445  0.03703704  0.03157895  
     0.04022989  0.05369128]  
   [ 0.05369128  0.04022989  0.03157895  0.03703704  0.04444445  0.05454545  
     0.06666667  0.07142857]]  
  
  [[ 0.13793103  0.10000001  0.0212766   0.00787402  0.05194805  0.04  
     0.03448276  0.04545454]  
   [ 0.07142857  0.06666667  0.05454545  0.04444445  0.03703704  0.03157895  
     0.04022989  0.05369128]]]]  

Here I chose n/2 = 2, k = 0, α = 1, β = 1. For example, for the first pixel of channel 1, whose value is 1, plugging into the formula gives 1/(1² + 2² + 3²) = 0.07142857; for the first pixel of channel 4, whose value is 4, it gives 4/(2² + 3² + 4² + 5² + 6²) = 0.04444445; and so on.
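A quick sanity check of those numbers in plain Python (my own verification against the printed output above):

print(1 / (1**2 + 2**2 + 3**2))                # 0.07142857  (channel 1: window clamped on the left)
print(4 / (2**2 + 3**2 + 4**2 + 5**2 + 6**2))  # 0.04444444  (channel 4: full 5-wide window)
print(8 / (6**2 + 7**2 + 8**2))                # 0.05369128  (channel 8: window clamped on the right)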

Note: the feature map here is [1, 2, 2, 8], where 1 is the number of images, 2×2 is the image height and width, and 8 is the number of layers (maps). LRN does its computation across the maps; the values it computes live at each pixel of the image, and the number of images plays no part!

We can picture this by splitting the feature map into individual channel images: the first channel is [1, 8, 4, 1], the second [2, 7, 3, 2], the third [3, 6, 2, 3], and so on.
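This split is easy to check in code, reusing the image array from the session run above:

# Each image[0, :, :, c] is one 2x2 channel; flattened, the first three read off as above.
for c in range(3):
    print(image[0, :, :, c].flatten())
# [ 1.  8.  4.  1.]
# [ 2.  7.  3.  2.]
# [ 3.  6.  2.  3.]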

The calculation then corresponds one-to-one with this per-channel view; at the edges, where the window cannot gather a full n neighbors, the out-of-range terms are simply omitted.

Can you sense the drawbacks of this method? It certainly has some effect, since normalizing the pixel values is convenient for computation. But for an image as a whole it does not accomplish much: because different positions end up normalized differently, some features fail to show through, and sometimes the result is actually worse.
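As an aside, the op still exists in TensorFlow 2.x, where the Session boilerplate gives way to eager execution. A minimal sketch, assuming TF 2.x and using a made-up input purely for illustration:

import tensorflow as tf

# Hypothetical 1x2x2x8 feature map holding the values 1..32.
a = tf.reshape(tf.range(1.0, 33.0), [1, 2, 2, 8])
normal_a = tf.nn.local_response_normalization(a, depth_radius=2, bias=0, alpha=1, beta=1)
print(normal_a.numpy())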


References:

  https://blog.csdn.net/mao_xiao_feng/article/details/53488271

  https://www.jianshu.com/p/c06aea337d5d

  https://blog.csdn.net/u012436149/article/details/52985303


Original post: https://www.cnblogs.com/wjy-lulu/p/8993897.html
