1. How TensorFlow Works
Running TensorFlow involves two steps:
First, construct a Graph.
Second, compute along the Graph inside a Session.
The second step can be further divided into a training stage and an execution stage, as in the sketch below.
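A minimal sketch of these two steps (assuming the TensorFlow 1.x API used throughout this post):
import tensorflow as tf
# Step 1: construct the Graph (no computation happens here)
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b
# Step 2: run the Graph inside a Session
with tf.Session() as sess:
    print(sess.run(c))  # 6.0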
2. Basic Tensorflow concepts
A Tensor is essentially a multi-dimensional array, such as a vector or a matrix.
import tensorflow as tf
a = tf.zeros(shape=[2, 2])  # we get a 2x2 tensor
A Variable is similar to a variable in other programming languages. In TensorFlow, however, it usually refers to the trainable parameters, such as the Weight and Bias of a model.
W = tf.Variable(tf.zeros(shape=[1,2]))
B = tf.Variable(tf.zeros(shape=[1,2]))
Variables must be explicitly initialized in a Session, for example:
init = tf.initialize_all_variables()
with tf.Session() as session:
    session.run(init)
Placeholder
We use a placeholder to declare the shape of an input tensor; the actual values are only supplied during the execution stage.
Train_x = tf.placeholder(tf.float32, [None, n_input, tensor_size], name='input_x')
Train_y = tf.placeholder(tf.float32, [None, n_classes], name='input_y')
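A minimal sketch of feeding a placeholder at run time (the shape here is hypothetical):
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, [None, 3], name='x')  # batch size left as None
doubled = x * 2
with tf.Session() as sess:
    batch = np.ones((4, 3), dtype=np.float32)
    # values are supplied only at run time, via feed_dict
    print(sess.run(doubled, feed_dict={x: batch}))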
Session
A Session is where the Graph actually gets executed. We can use a Session to train a model, make predictions, and fetch Variables from the model to check their values.
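For example, a small sketch of fetching a Variable's current value inside a Session:
import tensorflow as tf
W = tf.Variable(tf.zeros([1, 2]), name='W')
init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    # fetch the Variable from the graph and inspect its value
    print(sess.run(W))  # [[0. 0.]]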
3. Create a Model
# build the Graph
x = tf.placeholder(tf.float32, [None, 784]) # placeholder for input: x
y = tf.placeholder(tf.float32, [None, 10]) # placeholder for input: y
W = tf.Variable(tf.zeros([784, 10])) # weight
b = tf.Variable(tf.zeros([10])) # bias
a = tf.nn.softmax(tf.matmul(x, W) + b) # output of the model
#define loss function
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(a), reduction_indices=[1]))
optimizer = tf.train.GradientDescentOptimizer(0.5) # learning rate=0.5
train = optimizer.minimize(cross_entropy) # minimize the loss
#check accuracy
correct_prediction = tf.equal(tf.argmax(a, 1), tf.argmax(y, 1)) # compare real target and prediction result
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # calculate the average accuracy
4. Complete Code Example
"""A very simple MNIST classifier. See extensive documentation at http://tensorflow.org/tutorials/mnist/beginners/index.md """ from __future__ import absolute_import from __future__ import division from __future__ import print_function # Import data from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf flags = tf.app.flags FLAGS = flags.FLAGS flags.DEFINE_string(‘data_dir‘, ‘/tmp/data/‘, ‘Directory for storing data‘) # 把数据放在/tmp/data文件夹中 mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True) # 读取数据集 # 建立抽象模型 x = tf.placeholder(tf.float32, [None, 784]) # 占位符 y = tf.placeholder(tf.float32, [None, 10]) W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) a = tf.nn.softmax(tf.matmul(x, W) + b) # 定义损失函数和训练方法 cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(a), reduction_indices=[1])) # 损失函数为交叉熵 optimizer = tf.train.GradientDescentOptimizer(0.5) # 梯度下降法,学习速率为0.5 train = optimizer.minimize(cross_entropy) # 训练目标:最小化损失函数 # Test trained model correct_prediction = tf.equal(tf.argmax(a, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train sess = tf.InteractiveSession() # 建立交互式会话 tf.initialize_all_variables().run() for i in range(1000): batch_xs, batch_ys = mnist.train.next_batch(100) train.run({x: batch_xs, y: batch_ys}) print(sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels}))
Original article: https://www.cnblogs.com/liutianyi10/p/9162095.html