
[Repost] Chinese License Plate Character OCR with a CNN (Convolutional Neural Network) in Theano

Posted: 2017-01-03 19:02:22


Original article: http://m.blog.csdn.net/article/details?id=50989742

 

For some time now I have been working through Michael Nielsen's deep learning tutorial.

 

I used his code to test recognition of Chinese license plate characters in Theano. I have no GPU, yet after only 16 epochs the recognition accuracy reached 98.41%. Given the low quality of the source images, that is quite a good result.


There are 31 classes in total. The Chinese license plate character data comes from the dataset of the Chinese plate recognition project EasyPR. The dataset is very unevenly distributed across classes, which can cause individual classes to fit inconsistently and lower the recognition rate, so new samples are generated by randomly applying slight distortions to existing images, keeping the number of samples per class balanced.

Below is the function that slightly distorts an image to generate additional samples (the helper r, which draws a random offset, is not shown in the original post and is reconstructed here as an assumption).

 

import numpy as np
import cv2

def r(factor):
    # helper assumed (not in the original post): random offset in [0, factor)
    return np.random.random() * factor

def rotRandrom(img, factor, size):
    """Apply a slight random perspective distortion to an image.

    img     flattened input image
    factor  maximum distortion offset in pixels
    size    target (height, width) of the image
    """
    img = img.reshape(size)
    shape = size

    # source corners: the four corners of the image
    pts1 = np.float32([[0, 0], [0, shape[0]], [shape[1], 0], [shape[1], shape[0]]])
    # destination corners: each corner jittered inward by a random offset
    pts2 = np.float32([[r(factor), r(factor)],
                       [0, shape[0] - r(factor)],
                       [shape[1] - r(factor), 0],
                       [shape[1] - r(factor), shape[0] - r(factor)]])
    M = cv2.getPerspectiveTransform(pts1, pts2)
    # dsize is (width, height); the original passed (shape[0], shape[1]),
    # which only worked because the images are square
    dst = cv2.warpPerspective(img, M, (shape[1], shape[0]))
    return dst.ravel()
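The distortion works purely by jittering the four corner points before the perspective warp; that corner construction can be sketched without OpenCV. This is an illustration only, with `r` defined as assumed above (a random float offset in [0, factor)):

```python
import random

def r(factor):
    # assumed helper: random offset in [0, factor)
    return random.random() * factor

h, w = 28, 28
factor = 0.88   # the value used later when generating samples

# original corners and their jittered counterparts
pts1 = [(0, 0), (0, h), (w, 0), (w, h)]
pts2 = [(r(factor), r(factor)),
        (0, h - r(factor)),
        (w - r(factor), 0),
        (w - r(factor), h - r(factor))]

for (x1, y1), (x2, y2) in zip(pts1, pts2):
    # every corner moves by less than `factor` pixels in each direction,
    # which is why the distortion stays "slight"
    assert abs(x1 - x2) < factor and abs(y1 - y2) < factor
```

With factor = 0.88 each corner moves by under a pixel, so the generated samples stay close to the originals while still varying.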

 

 

The CNN structure used for training is as follows (all activations are ReLU):

Conv (5*5 kernels), 25 feature maps ->

Pooling 2*2 ->

Conv (5*5 kernels), 16 feature maps ->

Pooling 2*2 ->

FullyConnectedLayer, 120 neurons ->

FullyConnectedLayer, 84 neurons ->

Softmax output, 31 classes
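The layer sizes chain together as a quick sanity check of the shapes (pure arithmetic, no framework needed; assumes "valid" convolutions and non-overlapping pooling, as Nielsen's ConvPoolLayer uses):

```python
def conv_out(size, kernel):
    # "valid" convolution: output shrinks by kernel - 1
    return size - kernel + 1

def pool_out(size, pool):
    # non-overlapping max pooling
    return size // pool

s = 28              # input image is 28x28
s = conv_out(s, 5)  # after first 5x5 conv: 24
s = pool_out(s, 2)  # after 2x2 pooling: 12
s = conv_out(s, 5)  # after second 5x5 conv: 8
s = pool_out(s, 2)  # after 2x2 pooling: 4

flat = 16 * s * s   # 16 feature maps of 4x4 each
print(flat)         # 256
```

The result, 256, matches the n_in=16*4*4 of the first fully connected layer in the network definition, and 12 matches the image_shape of the second ConvPoolLayer.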

 

 

 

import network3
from network3 import Network
from network3 import ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer
from network3 import ReLU

training_data, validation_data, test_data = network3.load_data_cPickle("./data.pkl")
mini_batch_size = 10
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(25, 1, 5, 5),
                  poolsize=(2, 2),
                  activation_fn=ReLU),
    ConvPoolLayer(image_shape=(mini_batch_size, 25, 12, 12),
                  filter_shape=(16, 25, 5, 5),
                  poolsize=(2, 2),
                  activation_fn=ReLU),
    FullyConnectedLayer(n_in=16*4*4, n_out=120, activation_fn=ReLU),
    FullyConnectedLayer(n_in=120, n_out=84, activation_fn=ReLU),
    SoftmaxLayer(n_in=84, n_out=31)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.03, validation_data, test_data, lmbda=0.1)
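The per-epoch batch count in the training log further down can be cross-checked from these numbers. Assuming, as in the dataset-building code later in the post, that each of the 31 classes is padded to 1400 samples and 90% go to training:

```python
classes = 31
per_class = 1400          # each class padded to 1400 samples
total = classes * per_class
train = int(0.9 * total)  # 90/10 train/test split
mini_batch_size = 10
batches_per_epoch = train // mini_batch_size
print(total, train, batches_per_epoch)  # 43400 39060 3906
```

About 3906 mini-batches per epoch is consistent with the log, where the "Training mini-batch number" counter advances by roughly 4000 between epoch summaries.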

 

The following function builds the dataset.

 

# Python 2 / old OpenCV API (cPickle, cv2.CV_LOAD_IMAGE_GRAYSCALE)
import os
import cPickle
import numpy as np
import cv2
import rotrandom   # module containing the rotRandrom function above

def make_dataset(dirn):
    set = []
    labels = []

    def findinside(dirname, code):
        print "code", code
        print "dirname", dirname

        for parent, dirnames, filenames in os.walk(dirname):
            adder = 1400 - len(filenames)   # pad each class up to 1400 samples
            len_d = len(filenames)
            for filename in filenames:
                path = parent + "/" + filename
                if path.endswith(".jpg"):
                    img = cv2.imread(path, cv2.CV_LOAD_IMAGE_GRAYSCALE)
                    img = cv2.resize(img, (28, 28))
                    img = img.astype(np.float32) / 255
                    set.append(img.ravel())
                    labels.append(code)
            # generate distorted copies of randomly chosen existing samples
            for i in range(adder):
                c_index = int(np.random.rand() * len_d)
                l_set = len(set)
                set.append(rotrandom.rotRandrom(set[l_set - len_d + c_index], 0.88, (28, 28)))
                labels.append(code)

        print len(set), dirname, len(filenames)

    for parent, dirnames, filenames in os.walk(dirn):
        num = len(dirnames)
        for i in range(num):
            c_path = dirn + "/" + dirnames[i]   # fixed: was the global dir_chars
            findinside(c_path, i)

    # shuffle samples and labels with the same permutation
    shuffle = np.random.permutation(len(set))

    print len(set)
    set = np.array(set)
    labels = np.array(labels)
    set, labels = set[shuffle], labels[shuffle]
    train_n = int(0.9 * len(set))

    training_set, test_set = np.split(set, [train_n])
    training_labels, test_labels = np.split(labels, [train_n])
    print training_labels
    validation_set = test_set.copy()
    validation_labels = test_labels.copy()   # bug fix: was test_set.copy()
    training_data = [training_set, training_labels]
    validation_data = [validation_set, validation_labels]
    test_data = [test_set, test_labels]

    data = [training_data, validation_data, test_data]
    fileid = open("./data.pkl", "wb")
    cPickle.dump(data, fileid)
    fileid.close()


dir_chars = "./charsChinese"
make_dataset(dir_chars)
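The shuffle-then-split step at the end of make_dataset can be illustrated on a toy list. This is a pure-Python stand-in for the np.random.permutation / np.split calls, with illustrative names:

```python
import random

samples = list(range(20))          # stand-in for the flattened image vectors
labels = [i % 2 for i in samples]  # stand-in for the class codes

# shuffle samples and labels with the same permutation,
# so each sample keeps its label
perm = list(range(len(samples)))
random.shuffle(perm)
samples = [samples[i] for i in perm]
labels = [labels[i] for i in perm]

# 90/10 train/test split, as in make_dataset
train_n = int(0.9 * len(samples))
training_set, test_set = samples[:train_n], samples[train_n:]
training_labels, test_labels = labels[:train_n], labels[train_n:]

print(len(training_set), len(test_set))  # 18 2
```

Shuffling before splitting matters here because the samples are appended class by class; without it, the 10% tail taken for testing would contain only the last few classes.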

 

Accuracy reached 98.41% after the 16th epoch; training took roughly 10 minutes.

 

 

Training mini-batch number 0
Training mini-batch number 1000
Training mini-batch number 2000
Training mini-batch number 3000
Epoch 0: validation accuracy 89.15%
This is the best validation accuracy to date.
The corresponding test accuracy is 89.15%
Training mini-batch number 4000
Training mini-batch number 5000
Training mini-batch number 6000
Training mini-batch number 7000
Epoch 1: validation accuracy 94.65%
This is the best validation accuracy to date.
The corresponding test accuracy is 94.65%
Training mini-batch number 8000
Training mini-batch number 9000
Training mini-batch number 10000
Training mini-batch number 11000
Epoch 2: validation accuracy 95.44%
This is the best validation accuracy to date.
The corresponding test accuracy is 95.44%
Training mini-batch number 12000
Training mini-batch number 13000
Training mini-batch number 14000
Training mini-batch number 15000
Epoch 3: validation accuracy 96.13%
This is the best validation accuracy to date.
The corresponding test accuracy is 96.13%
Training mini-batch number 16000
Training mini-batch number 17000
Training mini-batch number 18000
Training mini-batch number 19000
Epoch 4: validation accuracy 96.91%
This is the best validation accuracy to date.
The corresponding test accuracy is 96.91%
Training mini-batch number 20000
Training mini-batch number 21000
Training mini-batch number 22000
Training mini-batch number 23000
Epoch 5: validation accuracy 96.52%
Training mini-batch number 24000
Training mini-batch number 25000
Training mini-batch number 26000
Training mini-batch number 27000
Epoch 6: validation accuracy 96.87%
Training mini-batch number 28000
Training mini-batch number 29000
Training mini-batch number 30000
Training mini-batch number 31000
Epoch 7: validation accuracy 96.87%
Training mini-batch number 32000
Training mini-batch number 33000
Training mini-batch number 34000
Training mini-batch number 35000
Epoch 8: validation accuracy 97.58%
This is the best validation accuracy to date.
The corresponding test accuracy is 97.58%
Training mini-batch number 36000
Training mini-batch number 37000
Training mini-batch number 38000
Training mini-batch number 39000
Epoch 9: validation accuracy 97.49%
Training mini-batch number 40000
Training mini-batch number 41000
Training mini-batch number 42000
Epoch 10: validation accuracy 97.60%
This is the best validation accuracy to date.
The corresponding test accuracy is 97.60%
Training mini-batch number 43000
Training mini-batch number 44000
Training mini-batch number 45000
Training mini-batch number 46000
Epoch 11: validation accuracy 97.93%
This is the best validation accuracy to date.
The corresponding test accuracy is 97.93%
Training mini-batch number 47000
Training mini-batch number 48000
Training mini-batch number 49000
Training mini-batch number 50000
Epoch 12: validation accuracy 97.83%
Training mini-batch number 51000
Training mini-batch number 52000
Training mini-batch number 53000
Training mini-batch number 54000
Epoch 13: validation accuracy 98.04%
This is the best validation accuracy to date.
The corresponding test accuracy is 98.04%
Training mini-batch number 55000
Training mini-batch number 56000
Training mini-batch number 57000
Training mini-batch number 58000
Epoch 14: validation accuracy 98.20%
This is the best validation accuracy to date.
The corresponding test accuracy is 98.20%
Training mini-batch number 59000
Training mini-batch number 60000
Training mini-batch number 61000
Training mini-batch number 62000
Epoch 15: validation accuracy 97.86%
Training mini-batch number 63000
Training mini-batch number 64000
Training mini-batch number 65000
Training mini-batch number 66000
Epoch 16: validation accuracy 98.41%
This is the best validation accuracy to date.
The corresponding test accuracy is 98.41%


Source of this repost: http://www.cnblogs.com/Crysaty/p/6245714.html
