
Training Keras models in parallel on multiple GPUs | keras multi gpu training


This article first appeared on my personal blog at https://kezunlin.me/post/95370db7/. Visit there for the latest version!

keras multi gpu training

Guide

multi_gpu_model

import tensorflow as tf
from keras.applications import Xception
from keras.utils import multi_gpu_model
import numpy as np

G = 8  # number of GPUs to replicate the model on
batch_size_per_gpu = 32
batch_size = batch_size_per_gpu * G

num_samples = 1000
height = 224
width = 224
num_classes = 1000

# Instantiate the base model (or "template" model).
# We recommend doing this under a CPU device scope,
# so that the model's weights are hosted on CPU memory.
# Otherwise they may end up hosted on a GPU, which would
# complicate weight sharing.
with tf.device('/cpu:0'):
    model = Xception(weights=None,
                     input_shape=(height, width, 3),
                     classes=num_classes)

# Replicates the model on 8 GPUs.
# This assumes that your machine has 8 available GPUs.
parallel_model = multi_gpu_model(model, gpus=G)
parallel_model.compile(loss='categorical_crossentropy',
                       optimizer='rmsprop')

# Generate dummy data.
x = np.random.random((num_samples, height, width, 3))
y = np.random.random((num_samples, num_classes))

# This `fit` call will be distributed on 8 GPUs.
# Since the batch size is 256, each GPU will process 32 samples.
parallel_model.fit(x, y, epochs=20, batch_size=batch_size)

# Save model via the template model (which shares the same weights):
model.save('my_model.h5')
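
A practical caveat the snippet above does not cover: if you attach the stock ModelCheckpoint callback to parallel_model, it serializes the multi-GPU wrapper rather than the template model. A minimal sketch of checkpointing the template model instead, assuming the model/parallel_model pair from the example above (the TemplateCheckpoint name and usage are my own illustration, not part of the Keras API):

from keras.callbacks import Callback

class TemplateCheckpoint(Callback):
    """Saves the CPU-scoped template model (not the multi-GPU wrapper)
    at the end of every epoch, so the resulting file can be reloaded
    later on any device layout with keras.models.load_model."""
    def __init__(self, template_model, filepath):
        super(TemplateCheckpoint, self).__init__()
        self.template_model = template_model
        self.filepath = filepath

    def on_epoch_end(self, epoch, logs=None):
        # The template and parallel models share weights, so saving
        # the template captures everything learned so far.
        self.template_model.save(self.filepath)

# Hypothetical usage with the example above:
# parallel_model.fit(x, y, epochs=20, batch_size=batch_size,
#                    callbacks=[TemplateCheckpoint(model, 'my_model.h5')])

The saved file can then be loaded with keras.models.load_model('my_model.h5') and used for inference on a single GPU, or wrapped again with multi_gpu_model to resume parallel training.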

Results

Results from "Multi-GPU training with Keras, Python, and deep learning" on Onepanel.io:
to validate this, we trained MiniGoogLeNet on the CIFAR-10 dataset with 4 V100 GPUs.

Using a single GPU, we obtained 63-second epochs and a total training time of 74m10s.
Using multi-GPU training with Keras and Python, we cut that to 16-second epochs and a total training time of 19m3s.
That is roughly a 4x speedup!
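
If you want to reproduce this kind of comparison on your own hardware, a minimal sketch of timing epochs with a Keras callback (the EpochTimer class and its usage are illustrative, not part of the benchmark above):

import time
from keras.callbacks import Callback

class EpochTimer(Callback):
    """Records the wall-clock duration of each epoch in seconds."""
    def on_train_begin(self, logs=None):
        self.epoch_times = []

    def on_epoch_begin(self, epoch, logs=None):
        self._start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        self.epoch_times.append(time.time() - self._start)

# Run the same fit() once on 1 GPU and once on G GPUs, then compare:
# timer = EpochTimer()
# parallel_model.fit(x, y, epochs=20, batch_size=batch_size,
#                    callbacks=[timer])
# print(timer.epoch_times)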


History

  • 20190910: created.

Copyright


Original article: https://www.cnblogs.com/kezunlin/p/11961533.html
