
cs20_8-1



1. VAE

  1. Lecture slides: https://docs.google.com/presentation/d/1VSNlkGcR-b39tMcuREjzZdhYOPvoZudpcbuNlf5hOIM/edit#slide=id.g334db163d4_0_41
  2. TODO: I printed a PDF copy of a tutorial on VAEs.
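  3. For reference, the objective a VAE maximizes is the evidence lower bound (ELBO), which the code in section 3 computes as `dist.log_prob(images) - tfd.kl_divergence(posterior, prior)`:

    \mathrm{ELBO}(x) = \mathbb{E}_{q(z \mid x)}\left[\log p(x \mid z)\right] - \mathrm{KL}\left(q(z \mid x) \,\|\, p(z)\right)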

2. TensorFlow Distributions (not distributed TensorFlow, but probability distributions!)

  1. PPL (probabilistic programming language)
  2. TensorFlow Distributions (`tf.contrib.distributions`, aliased `tfd` below)
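  3. A minimal sketch of the `tfd` API that the code below relies on (the specific distributions and parameters here are illustrative):

    import tensorflow as tf

    tfd = tf.contrib.distributions

    # Build a distribution object, sample from it, and score values under it.
    dist = tfd.Normal(loc=0., scale=1.)
    x = dist.sample(3)       # draw 3 samples, shape [3]
    logp = dist.log_prob(x)  # log-density at each sample
    # Analytic KL divergence between two registered distributions.
    kl = tfd.kl_divergence(tfd.Normal(0., 1.), tfd.Normal(1., 2.))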

3. VAE in TensorFlow

  1. Test code below (notes and a training sketch follow the listing):

    import os
    
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # pin to the first GPU

    import tensorflow as tf
    import numpy as np
    import scipy.misc

    tfd = tf.contrib.distributions  # probability distributions, not distributed TF
    
    def make_prior(code_size=2):
      # Standard normal prior p(z) over the latent code.
      mean, stddev = tf.zeros([code_size]), tf.ones([code_size])
      return tfd.MultivariateNormalDiag(mean, stddev)

    def make_encoder(images, code_size=2):
      # Approximate posterior q(z|x): a diagonal Gaussian whose parameters
      # come from a small fully connected network.
      images = tf.layers.flatten(images)
      hidden = tf.layers.dense(images, 100, tf.nn.relu)
      mean = tf.layers.dense(hidden, code_size)
      stddev = tf.layers.dense(hidden, code_size, tf.nn.softplus)
      return tfd.MultivariateNormalDiag(mean, stddev)

    def make_decoder(code, data_shape=[28, 28]):
      # Likelihood p(x|z): each pixel is an independent Bernoulli.
      hidden = tf.layers.dense(code, 100, tf.nn.relu)
      logit = tf.layers.dense(hidden, np.prod(data_shape))
      logit = tf.reshape(logit, [-1] + data_shape)
      return tfd.Independent(tfd.Bernoulli(logit), len(data_shape))
    
    # Wrap the networks in templates so the second make_decoder call below
    # reuses the decoder weights instead of creating a fresh network.
    make_encoder = tf.make_template('encoder', make_encoder)
    make_decoder = tf.make_template('decoder', make_decoder)

    images = tf.placeholder(tf.float32, [None, 28, 28])
    prior = make_prior()
    posterior = make_encoder(images)
    dist = make_decoder(posterior.sample())
    elbo = dist.log_prob(images) - tfd.kl_divergence(posterior, prior)
    optimize = tf.train.AdamOptimizer().minimize(-tf.reduce_mean(elbo))
    samples = make_decoder(prior.sample(10)).mean()  # For visualization
    print("samples: ", samples)  # a Tensor of shape [10, 28, 28]
    # Rearrange 10x28x28 into 28x28x10 with tf.transpose, then slice out the
    # first image; looping over the last axis yields the remaining 9 images.
    samples = tf.transpose(samples, [1, 2, 0])
    samples = samples[:, :, 0]
    print("samples-1: ", samples)
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        img_numpy = samples.eval(session=sess)  # tensor to numpy array
        print(type(img_numpy))
    scipy.misc.imsave('VAE_TF.png', img_numpy)  # numpy array to image file
    
    
    # tfd.Independent(dist, 2) tells TensorFlow to treat the two innermost
    # dimensions as data (event) dimensions rather than batch dimensions.
    # This means dist.log_prob(images) returns one number per image rather
    # than per pixel: as the name tfd.Independent() suggests, it simply
    # sums the per-pixel log probabilities.
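
  2. To make the shape behavior of `tfd.Independent` concrete, a small self-contained sketch (the batch size and the all-ones input are illustrative):

    import tensorflow as tf

    tfd = tf.contrib.distributions

    pixels = tfd.Bernoulli(tf.zeros([4, 28, 28]))  # batch_shape=[4, 28, 28]
    image_dist = tfd.Independent(pixels, 2)        # batch_shape=[4], event_shape=[28, 28]
    x = tf.ones([4, 28, 28])
    per_pixel = pixels.log_prob(x)      # shape [4, 28, 28]: one value per pixel
    per_image = image_dist.log_prob(x)  # shape [4]: pixel log-probs summed per image

  3. The code in item 1 builds the graph but never trains it, so `VAE_TF.png` shows samples from random weights. A minimal training-loop sketch, reusing `images`, `optimize`, and `elbo` from above and assuming MNIST via `tf.keras.datasets` (batch size and epoch count are illustrative):

    (train_x, _), _ = tf.keras.datasets.mnist.load_data()
    train_x = (train_x > 127).astype(np.float32)  # binarize for the Bernoulli likelihood
    mean_elbo = tf.reduce_mean(elbo)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(20):
            np.random.shuffle(train_x)
            for i in range(0, len(train_x), 100):
                sess.run(optimize, {images: train_x[i:i + 100]})
            print("epoch", epoch, "elbo", sess.run(mean_elbo, {images: train_x[:1000]}))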

4. BNN in TensorFlow

  1. Example code below (a training and prediction sketch follows the listing):

    import os
    
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # pin to the first GPU

    import tensorflow as tf
    import numpy as np
    
    # Bayesian NN: put a Gaussian posterior over the weight matrix and
    # regularize it toward a standard normal prior.

    tfd = tf.contrib.distributions

    def define_network(images, num_classes=10):
      mean = tf.get_variable('mean', [28 * 28, num_classes])
      stddev = tf.get_variable('stddev', [28 * 28, num_classes])
      prior = tfd.MultivariateNormalDiag(
          tf.zeros_like(mean), tf.ones_like(stddev))
      posterior = tfd.MultivariateNormalDiag(mean, tf.nn.softplus(stddev))
      bias = tf.get_variable('bias', [num_classes])  # Or Bayesian, too
      # Note the order: [batch, 784] x [784, num_classes] -> [batch, num_classes].
      logit = tf.nn.relu(tf.matmul(images, posterior.sample()) + bias)
      return tfd.Categorical(logit), posterior, prior

    images = tf.placeholder(tf.float32, [None, 28 * 28])  # TODO: feed flattened MNIST images
    label = tf.placeholder(tf.int32, [None])              # TODO: feed integer class labels
    
    dist, posterior, prior = define_network(images)
    elbo = (tf.reduce_mean(dist.log_prob(label)) -
            tf.reduce_mean(tfd.kl_divergence(posterior, prior)))
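
  2. Training again minimizes the negative ELBO. At prediction time, `posterior.sample()` draws fresh weights on every session run, so averaging repeated forward passes is a Monte Carlo estimate of the posterior predictive; a sketch under that assumption (`test_x` and the 20 samples are illustrative stand-ins):

    probs = tf.nn.softmax(dist.logits)  # class probabilities for one weight sample
    optimize = tf.train.AdamOptimizer().minimize(-elbo)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... training steps: sess.run(optimize, {images: batch_x, label: batch_y}) ...
        test_x = np.zeros([5, 28 * 28], np.float32)  # stand-in for real test images
        mc_probs = np.mean(
            [sess.run(probs, {images: test_x}) for _ in range(20)], axis=0)
        print(mc_probs.argmax(axis=1))  # Monte Carlo averaged class predictions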

Original post: https://www.cnblogs.com/LS1314/p/10371229.html
