
[TensorFlow] An example of the activation function tf.nn.relu


 

Code:
import tensorflow as tf
import numpy as np
### Define the add-layer function START ###

def add_layer(inputs, in_size, out_size, activation_function=None):
    """Add a fully connected neural network layer.
    :param inputs:  tensor feeding this layer
    :param in_size: number of neurons in the input layer
    :param out_size: number of neurons in the output layer
    :param activation_function: activation function (None means a linear output)
    """
    
    # 定义一个"in_size行,out_size列"的随机矩阵变量
    Weights=tf.Variable(tf.random_normal([in_size,out_size]))

    # 定义一个"1行,out_size列"的0值矩阵基准变量
    biases=tf.Variable(tf.zeros([1,out_size])+0.1)

    # Linear combination: inputs x Weights + biases
    Wx_plus_b = tf.matmul(inputs, Weights) + biases

    # Apply the activation function if one was given
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
### Define the add-layer function END ###
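
# Shape note: with inputs of shape [N, in_size], tf.matmul(inputs, Weights) gives
# [N, out_size], and the [1, out_size] biases broadcast across the N rows; for
# example add_layer(xs, 1, 10, tf.nn.relu) below produces a [None, 10] tensor.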


### Define the variable structure START ###

# Starting input: 300 evenly spaced values on the interval [-1, 1], reshaped with
# np.newaxis from a 1-D array of length 300 into a 300-row, 1-column matrix.
#   For example:
#   x1 = np.array([1, 2, 3, 4, 5])
#   #   the shape of x1 is (5,)
#   x1_new = x1[:, np.newaxis]
#   #   now, the shape of x1_new is (5, 1)
#   array([[1],
#          [2],
#          [3],
#          [4],
#          [5]])
#   x1_new = x1[np.newaxis, :]
#   #   now, the shape of x1_new is (1, 5)
#   array([[1, 2, 3, 4, 5]])

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]

# Noise: Gaussian random numbers with mean 0 and standard deviation 0.05,
# one for each element of x_data
noise = np.random.normal(0, 0.05, x_data.shape)

# Starting output: the square of x_data minus 0.5, plus the noise
y_data = np.square(x_data) - 0.5 + noise
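
# Sanity check: x_data, noise and y_data all have shape (300, 1), so y_data is a
# noisy parabola y = x^2 - 0.5 sampled at 300 points.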


# Placeholders that are fed at run time (None: any number of rows, 1 column)
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

### Define the network structure START ###

# Hidden layer layer01: 1 input -> 10 neurons, relu activation
layer01 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# Hidden layer layer02: 10 -> 10 neurons, sigmoid activation
layer02 = add_layer(layer01, 10, 10, activation_function=tf.nn.sigmoid)
# Prediction (output) layer: 10 -> 1 neuron, linear output
prediction = add_layer(layer02, 10, 1, activation_function=None)
# Compute the loss
# 1. Squared difference between the target output (the ys placeholder fed below) and the prediction
loss_square = tf.square(ys - prediction)
# 2. Sum the squared errors over each row (dimension 1) of the tensor
reduce_sum_square = tf.reduce_sum(loss_square, reduction_indices=[1])
# 3. Loss: the mean of those per-sample sums over all samples
loss = tf.reduce_mean(reduce_sum_square)
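
# Note: for this 1-column output, steps 1-3 amount to a mean squared error; in
# TF 1.x the same value could also be obtained with
# tf.losses.mean_squared_error(labels=ys, predictions=prediction)
# (an equivalent alternative, not used in this script).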

# Train with gradient descent (learning rate 0.1) to minimize the loss
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# Variable initialization op (initialize_all_variables is deprecated; the warning in
# the output below suggests tf.global_variables_initializer instead)
init = tf.initialize_all_variables()
# Create the session
sess = tf.Session()
# Run the initialization op
sess.run(init)
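
# In TF 1.x the lines above only build a computation graph; nothing is evaluated
# until sess.run() is called with concrete values for the placeholders, as in the
# training loop below.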

### Define the network structure END ###

### Define the variable structure END ###

# Training loop: 2000 gradient steps, printing the loss every 50 steps
for i in range(2000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
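
To visualize how well the trained network fits the noisy parabola, a short plotting snippet can be appended after the training loop. This is a minimal sketch, not part of the original post: it assumes matplotlib is installed and reuses the sess, prediction, xs, x_data and y_data defined above.

import matplotlib.pyplot as plt

# Evaluate the trained network on the training inputs
prediction_value = sess.run(prediction, feed_dict={xs: x_data})

# Scatter the noisy samples and overlay the fitted curve
plt.scatter(x_data, y_data, s=5, label="training data")
plt.plot(x_data, prediction_value, "r-", lw=2, label="network fit")
plt.legend()
plt.show()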

 

Output:
> Executing task: python d:\Work\002_WorkSpace\VSCode\Tensorflow\cnn.py <

WARNING:tensorflow:From C:\Program Files\Python\Python37\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating: Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Program Files\Python\Python37\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating: Use tf.cast instead.
WARNING:tensorflow:From C:\Program Files\Python\Python37\lib\site-packages\tensorflow\python\util\tf_should_use.py:193: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating: Use `tf.global_variables_initializer` instead.
2019-06-16 18:23:25.445771: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
0.9150444
0.018474927
0.012227052
0.008430008
0.006330067
0.005174632
0.0045147026
0.004099458
0.0037936615
0.0035521714
0.0033668855
0.003235288
0.0031796119
0.003798308
0.011472862
0.011122204
0.0038715526
0.0029777498
0.00284954
0.0028072707
0.0027813027
0.0027617016
0.0027467846
0.0027342557
0.0027231644
0.0027126905
0.0027037202
0.0026956936
0.0026887206
0.0026827992
0.0026773391
0.0026706234
0.0026643125
0.0026575066
0.0026512532
0.00264405
0.0026368005
0.0026302505
0.0026243015
0.0026188325

Terminal will be reused by tasks, press any key to close it.

 

Original post: https://www.cnblogs.com/Areas/p/11032577.html
