The test code has been uploaded to GitHub: yhlleo/mnist
Download the MNIST dataset and copy it into the Mnist_data directory. Assuming a TensorFlow environment is already set up, the four basic test scripts can all be run directly:
mnist_softmax.py: MNIST For ML Beginners
mnist_deep.py: Deep MNIST for Experts
fully_connected_feed.py: TensorFlow Mechanics 101
mnist_with_summaries.py: TensorBoard visualization of the training process
The output of mnist_softmax.py is fairly simple, so it is not listed here.
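For reference, the core of that script is just a softmax regression model. The following is a minimal sketch in that spirit, not the repository's exact file; the import path and API names follow the TF 1.x tutorials and may be spelled differently on older releases.

import tensorflow as tf
# TF 1.x tutorial location; older releases shipped a local input_data.py instead.
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("Mnist_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])   # flattened 28x28 images
y_ = tf.placeholder(tf.float32, [None, 10])   # one-hot labels
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b                       # logits

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # tf.initialize_all_variables() on old TF
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                        y_: mnist.test.labels}))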
mnist_deep.py takes quite a while to run through its iterations; its results are shown in the post "Deep MNIST code test".
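As a rough reminder of what that script builds, here is a partial sketch of the conv/pool building blocks from the Deep MNIST tutorial; the helper names are reproduced from memory, only the first convolutional layer is shown, and details may differ from the repository's file.

import tensorflow as tf

def weight_variable(shape):
    # small random initialization to break symmetry
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    # slightly positive bias to keep ReLU units active
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    # stride 1 and SAME padding keep the spatial size unchanged
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 max pooling halves the spatial resolution
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

# first conv layer: 28x28x1 image -> 32 feature maps, pooled down to 14x14
x_image = tf.placeholder(tf.float32, [None, 28, 28, 1])
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)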
The output of fully_connected_feed.py is as follows (my machine has 2 CPUs and no GPU):
Extracting Mnist_data/train-images-idx3-ubyte.gz
Extracting Mnist_data/train-labels-idx1-ubyte.gz
Extracting Mnist_data/t10k-images-idx3-ubyte.gz
Extracting Mnist_data/t10k-labels-idx1-ubyte.gz
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 2
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 2
Step 0: loss = 2.33 (0.023 sec)
Step 100: loss = 2.09 (0.007 sec)
Step 200: loss = 1.76 (0.009 sec)
Step 300: loss = 1.36 (0.007 sec)
Step 400: loss = 1.12 (0.007 sec)
Step 500: loss = 0.74 (0.008 sec)
Step 600: loss = 0.78 (0.006 sec)
Step 700: loss = 0.69 (0.007 sec)
Step 800: loss = 0.67 (0.007 sec)
Step 900: loss = 0.52 (0.010 sec)
Training Data Eval:
Num examples: 55000 Num correct: 47532 Precision @ 1: 0.8642
Validation Data Eval:
Num examples: 5000 Num correct: 4360 Precision @ 1: 0.8720
Test Data Eval:
Num examples: 10000 Num correct: 8705 Precision @ 1: 0.8705
Step 1000: loss = 0.56 (0.013 sec)
Step 1100: loss = 0.50 (0.145 sec)
Step 1200: loss = 0.33 (0.007 sec)
Step 1300: loss = 0.44 (0.006 sec)
Step 1400: loss = 0.39 (0.006 sec)
Step 1500: loss = 0.33 (0.009 sec)
Step 1600: loss = 0.56 (0.008 sec)
Step 1700: loss = 0.50 (0.007 sec)
Step 1800: loss = 0.42 (0.006 sec)
Step 1900: loss = 0.41 (0.006 sec)
Training Data Eval:
Num examples: 55000 Num correct: 49220 Precision @ 1: 0.8949
Validation Data Eval:
Num examples: 5000 Num correct: 4520 Precision @ 1: 0.9040
Test Data Eval:
Num examples: 10000 Num correct: 9014 Precision @ 1: 0.9014
[Finished in 22.8s]
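The "Precision @ 1" figures above are simply the number of examples whose true label is the top prediction, divided by the total number of examples (e.g. 9014 / 10000 = 0.9014). A sketch of that evaluation op, assuming the tutorial's tf.nn.in_top_k approach:

import tensorflow as tf

def evaluation(logits, labels):
    # For each example, is the true label among the top-1 predictions?
    correct = tf.nn.in_top_k(logits, labels, 1)
    # Number of correct predictions in this batch.
    return tf.reduce_sum(tf.cast(correct, tf.int32))

# The evaluation loop sums this op over all batches, then divides the
# total correct count by the number of examples to get Precision @ 1.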
mnist_with_summaries.py mainly demonstrates how to visualize training in TensorBoard. First, run the script.
Once it finishes, open a terminal and run tensorboard --logdir=/tmp/mnist_logs (the log directory must match the path in writer = tf.train.SummaryWriter('/tmp/mnist_logs', sess.graph_def)). The terminal will then print: Starting TensorBoard on port 6006 (You can navigate to http://localhost:6006)
Then open a browser and go to http://localhost:6006:
The page offers several views; the menu bar includes EVENTS, IMAGES, GRAPH, and HISTOGRAMS tabs, each of which can be opened and inspected.
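For context, the curves TensorBoard displays come from summary ops that are merged and written out with the SummaryWriter mentioned above. Below is a tiny self-contained sketch using the TF 0.x names implied by the post's SummaryWriter line (newer releases use tf.summary.scalar / tf.summary.FileWriter instead):

import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')
loss = tf.reduce_mean(tf.square(x))        # stand-in scalar for a real loss
tf.scalar_summary('loss', loss)            # record this scalar's curve
merged = tf.merge_all_summaries()          # one op that evaluates every summary

sess = tf.Session()
writer = tf.train.SummaryWriter('/tmp/mnist_logs', sess.graph_def)

summary_str = sess.run(merged, feed_dict={x: [1.0, 2.0, 3.0]})
writer.add_summary(summary_str, 0)         # logged at global step 0
writer.flush()                             # now visible under the EVENTS tab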
Also note that if you do not close that terminal, you cannot regenerate the visualization from another terminal; you will get a port-already-in-use error. For more details, see the original English documentation: TensorBoard: Visualizing Learning.
If there are any mistakes or omissions, corrections are welcome!
Original post: http://www.cnblogs.com/mthoutai/p/7041485.html