TensorFlow has two functions for creating variables: tf.Variable and tf.get_variable.
tf.Variable is defined as follows:
class Variable(object):
    def __init__(self, initial_value=None, trainable=True, collections=None, validate_shape=True, caching_device=None, name=None, variable_def=None, dtype=None, expected_shape=None, import_scope=None):
Creates a new variable with value initial_value.
The new variable is added to the graph collections listed in collections, which defaults to [GraphKeys.GLOBAL_VARIABLES].
If trainable is True the variable is also added to the graph collection GraphKeys.TRAINABLE_VARIABLES.
This constructor creates both a variable Op and an assign Op to set the variable to its initial value.
initial_value:
A Tensor, or Python object convertible to a Tensor, which is the initial value for the Variable. The initial value must have a shape specified unless validate_shape is set to False. Can also be a callable with no argument that returns the initial value when called. In that case, dtype must be specified. (Note that initializer functions from init_ops.py must first be bound to a shape before being used here.)
trainable:
If True, the default, also adds the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES. This collection is used as the default list of variables to use by the Optimizer classes.
collections:
List of graph collections keys. The new variable is added to these collections. Defaults to [GraphKeys.GLOBAL_VARIABLES].
validate_shape:
If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known.
caching_device:
Optional device string describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements.
name:
Optional name for the variable. Defaults to 'Variable' and gets uniquified automatically.
variable_def:
VariableDef protocol buffer. If not None, recreates the Variable object with its contents. variable_def and the other arguments are mutually exclusive.
dtype:
If set, initial_value will be converted to the given type. If None, either the datatype will be kept (if initial_value is a Tensor), or convert_to_tensor will decide.
expected_shape:
A TensorShape. If set, initial_value is expected to have this shape.
import_scope:
Optional string. Name scope to add to the Variable. Only used when initializing from protocol buffer.
raises:
ValueError – If both variable_def and initial_value are specified.
ValueError – If the initial value is not specified, or does not have a shape and validate_shape is True.
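As an illustration of these arguments, here is a minimal sketch (the shape, dtype, and name below are arbitrary examples, not taken from the original text) that creates a variable with an explicit initial value, collection list, and dtype:

import tensorflow as tf

# initial_value fixes the shape unless validate_shape=False
w = tf.Variable(tf.zeros([3, 4]),                            # a Tensor with a known shape
                trainable=True,                              # also added to GraphKeys.TRAINABLE_VARIABLES
                collections=[tf.GraphKeys.GLOBAL_VARIABLES], # the default collection list
                dtype=tf.float32,
                name='weights')
init_op = tf.global_variables_initializer()                  # groups the per-variable assign Ops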
tf.get_variable is defined as follows:
def get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None)
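As a small usage sketch (the name, shape, and regularization weight are made up): tf.get_variable is typically given a shape and an initializer rather than a concrete initial value, and the optional regularizer is a callable whose result is added to GraphKeys.REGULARIZATION_LOSSES:

import tensorflow as tf

kernel = tf.get_variable(name='kernel',
                         shape=[3, 4],
                         dtype=tf.float32,
                         initializer=tf.truncated_normal_initializer(stddev=0.1),
                         regularizer=lambda w: 0.01 * tf.nn.l2_loss(w),  # collected as a regularization loss
                         trainable=True)
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)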
tf.Variable always returns a new variable; if the given name already exists, the name is automatically modified (uniquified) and a new variable is created under the new name.
tf.get_variable has a variable-checking mechanism: it checks whether an already existing variable with the requested name has been marked as shared. If it has not, TensorFlow raises an error as soon as it reaches a second variable with the same name.
tf.get_variable is generally used together with tf.variable_scope, either to share a single variable within the same variable scope or to use variables with the same name in different variable scopes.
Test code:
import tensorflow as tf
w1 = tf.Variable(1, name='w1')
w2 = tf.Variable(2, name='w1')
print('w1.name=', w1.name)
print('w2.name=', w2.name)
# The two lines below would raise a ValueError: variable 'w3' already exists and is not marked for sharing
# w3 = tf.get_variable(name='w3', initializer=1)
# w4 = tf.get_variable(name='w3', initializer=2)
w3 = tf.get_variable(name='w3', initializer=2)
print('w3.name=', w3.name)
with tf.variable_scope('scope1') as scope1:
    w1scope1 = tf.Variable(1, name='w1')
    print('w1scope1.name=', w1scope1.name)
    w4 = tf.get_variable(name='w4', initializer=1.0)
    print('w4.name=', w4.name)
    # Either scope1.reuse_variables() or tf.get_variable_scope().reuse_variables() works here
    tf.get_variable_scope().reuse_variables()
    w5 = tf.get_variable(name='w4', initializer=2.0)
    print('w5.name=', w5.name)
    # Error: creation fails, since reuse is now on and no variable named 'w8' exists in this scope
    # w8 = tf.get_variable(name='w8', initializer=1.0)
print('w1.name=', w1.name)
with tf.variable_scope('scope2', reuse=None) as scope2:
    w6 = tf.get_variable(name='w6', initializer=1.0)
    print('w6.name=', w6.name)
    w1 = tf.Variable(1, name='w1')
    print('w1.name=', w1.name)
print('w1.name=', w1.name)
with tf.variable_scope('scope2', reuse=True):
    w7 = tf.get_variable(name='w6', initializer=2.0)
    print('w7.name=', w7.name)
Output:
w1.name= w1:0
w2.name= w1_1:0
w3.name= w3:0
w1scope1.name= scope1/w1:0
w4.name= scope1/w4:0
w5.name= scope1/w4:0
w1.name= w1:0
w6.name= scope2/w6:0
w1.name= scope2/w1:0
w1.name= scope2/w1:0
w7.name= scope2/w6:0
From the test results we can see:
1. If tf.Variable is given a name that already exists, the new variable's name is automatically suffixed with _1, _2, and so on.
2. A variable created with a given name inside a variable_scope is a completely different variable from a global variable with the same name, or from a same-named variable in another variable_scope.
3. To create a new variable under a variable_scope, the scope's reuse must be set to None.
4. To reuse a variable within the same variable_scope, call the scope object's reuse_variables() (e.g. scope1.reuse_variables()) or tf.get_variable_scope().reuse_variables(); after that, no new (non-reused) variables can be created in that scope.
5. To reuse variables from an existing variable_scope, simply re-open the scope with reuse=True (a minimal sharing sketch follows below).
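As a practical follow-up to point 5, here is a minimal weight-sharing sketch (layer sizes and names are made up, using the same TF 1.x API as above): the same builder function is called under two openings of one scope, and the second opening, with reuse=True, fetches the variables created by the first instead of creating new ones.

import tensorflow as tf

def dense(x, out_dim):
    # Under reuse, get_variable returns the existing 'w' and 'b' instead of creating new ones
    w = tf.get_variable('w', shape=[x.get_shape()[1], out_dim],
                        initializer=tf.truncated_normal_initializer(stddev=0.1))
    b = tf.get_variable('b', shape=[out_dim], initializer=tf.zeros_initializer())
    return tf.matmul(x, w) + b

x1 = tf.placeholder(tf.float32, [None, 8])
x2 = tf.placeholder(tf.float32, [None, 8])

with tf.variable_scope('shared'):
    y1 = dense(x1, 4)        # creates shared/w:0 and shared/b:0
with tf.variable_scope('shared', reuse=True):
    y2 = dense(x2, 4)        # reuses the same shared/w:0 and shared/b:0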
Original article: http://www.cnblogs.com/qggg/p/6858325.html