https://www.kaggle.com/zalando-research/fashionmnist
Fashion-MNIST is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Zalando intends Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and the same structure of training and testing splits.
The original MNIST dataset contains handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all," they say. "Well, if it does work on MNIST, it may still fail on others."
Zalando seeks to replace the original MNIST dataset
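As a quick sanity check of the figures above (60,000 / 10,000 examples, 28x28 grayscale images, 10 classes), the dataset can be loaded through keras.datasets and its shapes inspected. This is a minimal sketch, not part of the post's code below:

import numpy as np
from tensorflow import keras

# Downloads the dataset on first call and returns the train/test split.
(train_images, train_labels), (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()
print(train_images.shape)        # (60000, 28, 28) -- training images
print(test_images.shape)         # (10000, 28, 28) -- test images
print(np.unique(train_labels))   # [0 1 2 3 4 5 6 7 8 9] -- 10 classes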
https://github.com/fanqingsong/code-snippet/blob/master/machine_learning/FMNIST/code.py
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np

print(tf.__version__)

# Load the Fashion-MNIST train/test split (60,000 / 10,000 images).
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Scale pixel values from [0, 255] to [0, 1].
train_images = train_images / 255.0
test_images = test_images / 255.0

# Simple fully connected network: flatten 28x28 -> 128 ReLU units -> 10-way softmax.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=5)

# Evaluate on the held-out test set.
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)

# Compare the first test image's true label with the argmax of its prediction.
predictions = model.predict(test_images)
print(test_labels[0])
print(np.argmax(predictions[0]))
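Note that tf.train.AdamOptimizer is already flagged as deprecated in TF 1.14, which is where one of the warnings in the run log below comes from. A sketch of a compile call that avoids the warning, and also works under TensorFlow 2.x, is to pass the optimizer by name:

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])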
root@DESKTOP-OGSLB14:~/mine/code-snippet/machine_learning/FMNIST#
root@DESKTOP-OGSLB14:~/mine/code-snippet/machine_learning/FMNIST# python code.py
1.14.0
WARNING: Logging before flag parsing goes to stderr.
W0816 23:26:49.741352 140630311962432 deprecation.py:506] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling __init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
W0816 23:26:49.977197 140630311962432 deprecation_wrapper.py:119] From code.py:33: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.
2019-08-16 23:26:50.289949: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-08-16 23:26:50.684455: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1992000000 Hz
2019-08-16 23:26:50.686887: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fffe64d99e0 executing computations on platform Host. Devices:
2019-08-16 23:26:50.686967: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
2019-08-16 23:26:50.958569: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Epoch 1/5
60000/60000 [==============================] - 3s 50us/sample - loss: 0.4992 - acc: 0.8240
Epoch 2/5
60000/60000 [==============================] - 2s 40us/sample - loss: 0.3758 - acc: 0.8650
Epoch 3/5
60000/60000 [==============================] - 3s 42us/sample - loss: 0.3382 - acc: 0.8770
Epoch 4/5
60000/60000 [==============================] - 2s 41us/sample - loss: 0.3135 - acc: 0.8854
Epoch 5/5
60000/60000 [==============================] - 3s 42us/sample - loss: 0.2953 - acc: 0.8922
10000/10000 [==============================] - 0s 25us/sample - loss: 0.3533 - acc: 0.8715
('Test accuracy:', 0.8715)
9
9
root@DESKTOP-OGSLB14:~/mine/code-snippet/machine_learning/FMNIST#
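The two trailing 9s are the true label of the first test image (test_labels[0]) and the index of the highest-scoring class in its prediction vector (np.argmax(predictions[0])). In class_names, index 9 is 'Ankle boot', so the model classified this image correctly. A small follow-up sketch, assuming the variables from code.py are still in scope:

import numpy as np

print(class_names[int(test_labels[0])])              # 'Ankle boot' -- ground truth
print(class_names[int(np.argmax(predictions[0]))])   # 'Ankle boot' -- model prediction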
https://github.com/MachineIntellect/DeepLearner/blob/master/basic_classification.ipynb
https://tensorflow.google.cn/beta/guide/data
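The second link is the tf.data guide. As a sketch (not from the original post), the same normalized arrays could also be fed to the model through a tf.data.Dataset input pipeline, which is the approach that guide describes; this assumes TensorFlow 2.x, where model.fit accepts a Dataset directly:

import tensorflow as tf

# Hypothetical pipeline over the already-normalized arrays from the code above.
train_ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
train_ds = train_ds.shuffle(buffer_size=10000).batch(32)

model.fit(train_ds, epochs=5)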
Fashion-MNIST recognition (TensorFlow + Keras + NN)
Original post: https://www.cnblogs.com/lightsong/p/11366934.html