
TensorFlow details (P89): using collections


Knowledge summary

(1) Note once more how the summary ops are used (merge them, then write the result with a FileWriter).
(2) In x = rdm.rand(dataset_size, 2) and y_ = [[x1**2 + x2**2] for (x1, x2) in x], mind the detail: iterating over a 2-D NumPy array yields its rows, so each row unpacks directly into (x1, x2) (see the short demo after this list).
(3) Note that within one batch, the entire forward pass uses a single set of weights W; the regularization terms added at that point are therefore built from that same W. Backpropagation then updates W, and the next forward pass begins.
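
For point (2), here is a minimal sketch of the unpacking behavior; the 4-row array is made up for illustration:

import numpy as np
from numpy.random import RandomState

rdm = RandomState(1)
x = rdm.rand(4, 2)                        # shape (4, 2); iterating yields rows
y_ = [[x1**2 + x2**2] for (x1, x2) in x]  # each row unpacks into (x1, x2)
print(np.array(y_).shape)                 # (4, 1), matching the [None, 1] placeholder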
The code is as follows:

import tensorflow as tf
import numpy as np
from numpy.random import RandomState


rdm = RandomState(1)
dataset_size = 128
x = rdm.rand(dataset_size, 2)                 # 128 samples with 2 features each
y_ = [[x1**2 + x2**2] for (x1, x2) in x]      # label: sum of squares of each row


def get_weight(shape, alpha, name):
    with tf.variable_scope("get_variable" + name):
        # truncated normal initializer with stddev=0.01 (the keyword matters:
        # the first positional argument of truncated_normal_initializer is the mean)
        var = tf.get_variable(name, shape, tf.float32,
                              initializer=tf.truncated_normal_initializer(stddev=0.01))
        # register this variable's L2 penalty in the "losses" collection
        tf.add_to_collection("losses", tf.contrib.layers.l2_regularizer(alpha)(var))
        return var


with tf.name_scope("generate_value"):
    xs = tf.placeholder(tf.float32, [None, 2], name="x_input")
    ys = tf.placeholder(tf.float32, [None, 1], name="y_output")
batch_size = 8
layers_dimension = [2, 10, 10, 10, 1]   # input, three hidden layers, output
n_layers = len(layers_dimension)
in_dimension = layers_dimension[0]
cur_layer = xs

# build the fully connected layers; each layer's weights add their own L2 term
for i in range(1, n_layers):
    out_dimension = layers_dimension[i]
    with tf.variable_scope("layer%d" % i):
        weights = get_weight([in_dimension, out_dimension], 0.001, "layers")
        biases = tf.get_variable("biases", [out_dimension], tf.float32, tf.constant_initializer(0.0))
        cur_layer = tf.matmul(cur_layer, weights) + biases
        cur_layer = tf.nn.relu(cur_layer)
    in_dimension = layers_dimension[i]

with tf.name_scope("loss_op"):
    mse_loss = tf.reduce_mean(tf.square(ys - cur_layer))
    tf.add_to_collection("losses", mse_loss)        # the MSE joins the L2 terms
    loss = tf.add_n(tf.get_collection("losses"))    # total loss = MSE + all L2 terms
    tf.summary.scalar("loss", loss)

train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

merged = tf.summary.merge_all()
init = tf.global_variables_initializer()
with tf.Session() as sess:
    writer = tf.summary.FileWriter("path/", tf.get_default_graph())
    sess.run(init)
    for i in range(5000):
        start = i * batch_size % dataset_size
        end = min(start + batch_size, dataset_size)
        if i % 50 == 0:
            result = sess.run(merged, feed_dict={xs: x, ys: y_})
            writer.add_summary(result, global_step=i)
        if i % 500 == 0:
            loss_value = sess.run(loss, feed_dict={xs: x, ys: y_})
            print("After %d training steps, loss is %g" % (i, loss_value))
        _ = sess.run(train_op, feed_dict={xs: x[start:end], ys: y_[start:end]})
    writer.close()
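
To isolate the collection mechanism itself, here is a minimal sketch, independent of the network above; the constant w, the scale 0.5, and the key "demo_losses" are made up for illustration. Each tf.add_to_collection call appends a tensor under a key, tf.get_collection returns the full list, and tf.add_n sums it — exactly how the total loss above combines the MSE with every layer's L2 term.

import tensorflow as tf

w = tf.constant([[1.0, 2.0], [3.0, 4.0]])
l2_term = tf.contrib.layers.l2_regularizer(0.5)(w)     # 0.5 * sum(w**2) / 2 = 7.5
tf.add_to_collection("demo_losses", l2_term)
tf.add_to_collection("demo_losses", tf.constant(1.0))  # stand-in for the MSE term
total = tf.add_n(tf.get_collection("demo_losses"))     # 7.5 + 1.0

with tf.Session() as sess:
    print(sess.run(total))  # 8.5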


Original article: https://www.cnblogs.com/liuboblog/p/11619428.html
