In TensorFlow, a model is usually trained with the following pattern:
# Define the optimizer
opt = tf.train.AdamOptimizer(lr)
# Define the train node
train = opt.minimize(loss)

for i in range(100):
    sess.run(train)
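For concreteness, here is a minimal runnable sketch of this pattern. The toy linear-regression setup (the names x, y_, w, b and the synthetic data) is purely illustrative and not part of the original post:

import numpy as np
import tensorflow as tf

# Toy linear-regression problem (illustrative names: x, y_, w, b).
x  = tf.placeholder(tf.float32, [None, 1])
y_ = tf.placeholder(tf.float32, [None, 1])
w  = tf.Variable(tf.zeros([1, 1]))
b  = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y_))

lr = 0.01
# Define the optimizer and the train node exactly as above.
opt = tf.train.AdamOptimizer(lr)
train = opt.minimize(loss)

# Synthetic data for the toy problem.
data_x = np.random.rand(32, 1).astype(np.float32)
data_y = 3.0 * data_x + 1.0

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        sess.run(train, feed_dict={x: data_x, y_: data_y})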
Here `train` points to the training node in the tf.Graph. The call opt.minimize(loss) is rather opaque; it is equivalent to:
# Compute the gradients for a list of variables.
grads_and_vars = opt.compute_gradients(loss, <list of variables>)
# grads_and_vars is a list of tuples (gradient, variable).

# Ask the optimizer to apply the gradients.
opt.apply_gradients(grads_and_vars)
That is, it builds both the node that computes the gradients and the node in which the optimizer applies those gradients to update the variables.
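As a quick sanity check of this equivalence, the (gradient, variable) pairs returned by compute_gradients can be inspected before they are applied. A small sketch, reusing the opt and loss defined above:

grads_and_vars = opt.compute_gradients(loss)
for grad, var in grads_and_vars:
    # grad is None for variables that loss does not depend on
    print(var.name, grad)
train = opt.apply_gradients(grads_and_vars)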
Therefore, the gradients can be modified before they are applied, as follows:
grads_and_vars = opt.compute_gradients(loss, <list of variables>)
capped_grads_and_vars = [(MyCapper(grad), var) for grad, var in grads_and_vars]
opt.apply_gradients(capped_grads_and_vars)
Here are two examples:
# tf.clip_by_value(
#     t,
#     clip_value_min,
#     clip_value_max,
#     name=None
# )
grads_and_vars = opt.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_value(grad, -1., 1.), var)
                         for grad, var in grads_and_vars]
opt.apply_gradients(capped_grads_and_vars)
# tf.clip_by_global_norm(
#     t_list,
#     clip_norm,
#     use_norm=None,
#     name=None
# )
# Returns:
#     list_clipped: A list of Tensors of the same type as list_t.
#     global_norm: A 0-D (scalar) Tensor representing the global norm.
opt = tf.train.AdamOptimizer(lr)
grads, vars = zip(*opt.compute_gradients(loss))
grads, _ = tf.clip_by_global_norm(grads, 5.0)
train = opt.apply_gradients(zip(grads, vars))
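To tie the pieces together, the following sketch (again assuming the placeholder, loss, and data definitions from the earlier toy example) keeps the returned global_norm tensor so it can be monitored during training; note that it is the norm computed before clipping:

opt = tf.train.AdamOptimizer(lr)
grads, variables = zip(*opt.compute_gradients(loss))
clipped_grads, global_norm = tf.clip_by_global_norm(grads, 5.0)
train = opt.apply_gradients(zip(clipped_grads, variables))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        _, gn = sess.run([train, global_norm],
                         feed_dict={x: data_x, y_: data_y})
        # gn is the global norm of the gradients before clipping;
        # watching it is a simple way to spot exploding gradients.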
TensorFlow Optimizer.minimize() and gradient clipping
Original post: https://www.cnblogs.com/esoteric/p/9319266.html