
TensorFlow Official Documentation: Beginner Notes [Part 1]




Tensors

3 # a rank 0 tensor; this is a scalar with shape []
[1., 2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
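The nested lists above are plain Python values; wrapping one in tf.constant lets TensorFlow report its static shape. A minimal check, assuming TensorFlow 1.x as in the rest of these notes:

import tensorflow as tf

t = tf.constant([[[1., 2., 3.]], [[7., 8., 9.]]])  # the rank 3 example above
print(t.shape)  # (2, 1, 3)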

 

 

The Computational Graph

A TensorFlow program consists of two separate parts: building the computational graph and running the computational graph.

node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)

Output:

Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)

Printing the nodes does not output their values. To actually evaluate the nodes, you must create a Session object and call its run method to run the computational graph. A session encapsulates the control and state of the TensorFlow runtime.

sess = tf.Session()
print(sess.run([node1, node2]))

Output:

[3.0, 4.0]
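As an aside (a small sketch, not part of the original walkthrough), a Session can also be used as a context manager so that it is closed automatically; the remaining examples keep using the sess created above:

with tf.Session() as tmp_sess:
  print(tmp_sess.run([node1, node2]))  # [3.0, 4.0]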

 

node3 = tf.add(node1, node2)
print("node3:", node3)
print("sess.run(node3):", sess.run(node3))

Output:

node3: Tensor("Add:0", shape=(), dtype=float32)
sess.run(node3): 7.0

 

A placeholder is a promise to provide a value later, which lets the graph accept external inputs:

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)
print(sess.run(adder_node, {a: 3, b: 4.5}))

print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))
Output:
7.5
[ 3.  7.]
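Placeholders can also be given an explicit shape so that mismatched feeds fail early. A small aside, not in the original notes; the name c is arbitrary:

c = tf.placeholder(tf.float32, shape=[None])  # any 1-D vector
print(sess.run(c * 2, {c: [1., 2., 3.]}))     # roughly [ 2.  4.  6.]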

 

add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b: 4.5}))
Output:
22.5

 

Variables allow us to add trainable parameters to a graph. They are constructed with a type and an initial value:

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b

Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly run a special operation:

init = tf.global_variables_initializer()
sess.run(init)

Since x is a placeholder, we can evaluate linear_model for several values of x simultaneously:

print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
Output:
[ 0.          0.30000001  0.60000002  0.90000004]

To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function:

y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
Output:
23.66
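A quick plain-Python check of where 23.66 comes from; the predictions [0., 0.3, 0.6, 0.9] are the linear_model values printed just above:

preds = [0.0, 0.3, 0.6, 0.9]       # linear_model outputs for x = [1, 2, 3, 4]
targets = [0.0, -1.0, -2.0, -3.0]  # the y values fed above
print(sum((p - t) ** 2 for p, t in zip(preds, targets)))  # roughly 23.66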

The values of W and b can be reassigned with tf.assign; here we set them to the perfect values of -1 and 1:

fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])

print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))

Output:

0.0
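With W = -1 and b = 1 the model outputs exactly [0, -1, -2, -3], which matches y, so the loss drops to 0.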

 

tf.train API

TensorFlow provides optimizers that slowly change each variable in order to minimize the loss function. The simplest optimizer is gradient descent. It modifies each variable according to the magnitude of the derivative of the loss with respect to that variable; TensorFlow computes these derivatives automatically with tf.gradients.

For example:

optimizer = tf.train.GradientDescentOptimizer(0.01)

train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
  sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

print(sess.run([W, b]))

Output:

[array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]

The learned parameters end up very close to the W = -1, b = 1 that drove the loss to zero in the tf.assign example above.

 

Complete Program

The complete trainable linear regression model is as follows:

import tensorflow as tf

# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)

# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
  sess.run(train, {x: x_train, y: y_train})

# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
Output:
W: [-0.9999969] b: [ 0.99999082] loss: 5.69997e-11
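With the session still open, the fitted model can be used to predict on new inputs; an illustrative addition, with 5 and 6 as arbitrary test points:

print(sess.run(linear_model, {x: [5, 6]}))  # roughly [-4. -5.]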

Original source: http://www.cnblogs.com/yinghuali/p/7571788.html
