Dr. Ben Graham is an Assistant Professor in Statistics and Complexity at the University of Warwick. With a categorization accuracy of 0.95530, he took first place.
Thank you! I am very pleased to have won, and quite frankly pretty amazed at just how competitive the competition was.
When I first saw the competition, I did not think the test error would go below about 8%. I assumed 32x32 pixels just wasn't enough information to identify objects very reliably. As it turned out, everyone in the top 10 got below 7%, which is roughly on a par with human performance.
It is a deep convolutional network trained using SparseConvNet with architecture:
input=(3x126x126) - 320C2 - 320C2 - MP2 - 640C2 - 10% dropout - 640C2 - 10% dropout - MP2 - 960C2 - 20% dropout - 960C2 - 20% dropout - MP2 - 1280C2 - 30% dropout - 1280C2 - 30% dropout - MP2 - 1600C2 - 40% dropout - 1600C2 - 40% dropout - MP2 - 1920C2 - 50% dropout - 1920C1 - 50% dropout - 10C1 - Softmax output
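The layer notation is compact: roughly, "320C2" denotes a convolution with 320 output feature planes and a 2x2 filter, "MP2" a 2x2 max-pooling layer, and "n% dropout" a dropout layer with that rate. As an illustration only (the parser and field names below are my own, not part of SparseConvNet), the string can be decoded mechanically:

```python
import re

def parse_architecture(spec):
    """Turn the dash-separated layer string into a list of layer dicts.

    Understands three token shapes: "<planes>C<size>" (convolution),
    "MP<size>" (max pooling) and "<n>% dropout". Input/softmax tokens
    are handled separately in the original notation and skipped here.
    """
    layers = []
    for token in (t.strip() for t in spec.split(" - ")):
        if m := re.fullmatch(r"(\d+)C(\d+)", token):
            layers.append({"type": "conv", "planes": int(m[1]), "filter": int(m[2])})
        elif m := re.fullmatch(r"MP(\d+)", token):
            layers.append({"type": "maxpool", "size": int(m[1])})
        elif m := re.fullmatch(r"(\d+)% dropout", token):
            layers.append({"type": "dropout", "rate": int(m[1]) / 100})
    return layers

# The middle of the winning architecture, as written above.
spec = ("320C2 - 320C2 - MP2 - 640C2 - 10% dropout - 640C2 - 10% dropout - MP2 - "
        "960C2 - 20% dropout - 960C2 - 20% dropout - MP2 - 1280C2 - 30% dropout - "
        "1280C2 - 30% dropout - MP2 - 1600C2 - 40% dropout - 1600C2 - 40% dropout - "
        "MP2 - 1920C2 - 50% dropout - 1920C1 - 50% dropout - 10C1")
layers = parse_architecture(spec)
```

Read this way, the string describes 13 convolutional layers, 5 max-pooling layers and 10 dropout layers.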
It was trained taking advantage of:
The same architecture produces a test error of 20.68% for CIFAR-100.
The network took about 90 hours to train on an NVIDIA GeForce GTX780 graphics card. I had already written a convolutional neural network for spatially-sparse inputs to learn to recognise online Chinese handwriting.
Over the course of the competition I upgraded the program to allow dropout to be applied batchwise, and cleaned up some kernels that were accessing memory inefficiently. That made it feasible to train pretty large networks.
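Batchwise dropout means drawing a single random mask per minibatch and reusing it for every example in that batch, which maps onto GPU kernels much more cheaply than a fresh mask per example. A minimal numpy sketch of the idea (my own illustration, not Dr. Graham's actual CUDA implementation):

```python
import numpy as np

def batchwise_dropout(activations, rate, rng):
    """Drop the same feature positions for every example in the minibatch.

    activations: (batch, features) array; rate: probability of dropping.
    One mask of shape (features,) is drawn per call and broadcast over
    the batch, instead of a full (batch, features) mask as in ordinary
    per-example dropout.
    """
    mask = rng.random(activations.shape[1]) >= rate
    # Scale surviving units so the expected activation is unchanged.
    return activations * mask / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones((4, 1000))
y = batchwise_dropout(x, 0.5, rng)
```

Because the mask is shared, every row of `y` has zeros in exactly the same columns.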
Which papers/approaches by other scientists contributed the most to your top score?
The network architecture is the result of borrowing ideas from a number of recent papers. Reading each of those papers was jaw-dropping, as the ideas would not have occurred to me.
I am very interested in the idea of spatially-sparse 3d convolutional networks. For example, given a length of string, you might be able to pull both ends to produce a straight line. Alternatively, the string might contain a knot which you cannot get rid of no matter how hard you pull. That is an idea that is obvious to humans, but hard for a computer, as there are so many different kinds of knots.
Hopefully 3d convolutional networks can develop some of the physical intuition humans take for granted.
Besides convnets, I am very interested in machine learning techniques for time-series data, such as recurrent neural networks.
My pleasure; it was nice to see a couple of the other teams in the top 10 ("Jiki" and "Phil & Triskelion & Kazanova") use the code. Another Kaggler, Nagadomi, also made his code available during the competition. It was fascinating to see him implement some of the ideas to come out of the ILSVRC2014 competition such as "C3-C3-MP2" layers and Inception layers.
After the competition, I re-ran my top network on the 10,000 images from the original CIFAR-10 test set, resulting in 446 errors.
Here is a confusion matrix showing where the 446 errors come from:
(rows: true class, columns: predicted class)

             airplane automobile bird  cat deer  dog frog horse ship truck
airplane         0         3      10    2    2    0    2     0   16     3
automobile       1         0       1    0    0    0    0     0    3    12
bird             8         1       0   14   19    8    9     5    2     0
cat              4         1       8    0    9   57   20     2    5     2
deer             3         1      12    7    0    5    4     8    0     0
dog              4         1       7   39   10    0    1     7    1     1
frog             4         0       7    7    3    1    0     1    0     1
horse            6         0       3    4    7    8    0     0    0     0
ship             2         3       2    0    0    1    0     0    0     3
truck            3        20       0    2    0    0    1     0    7     0
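An error-only confusion matrix of this kind (correct predictions left off the diagonal) is straightforward to compute from label arrays. The helper below is a generic numpy sketch, and the labels in the usage example are toy values, not the actual CIFAR-10 predictions:

```python
import numpy as np

def error_confusion_matrix(y_true, y_pred, n_classes=10):
    """Count misclassifications: rows = true class, columns = predicted.

    Correct predictions are not counted, so the diagonal stays zero,
    matching the error-only table above.
    """
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        if t != p:
            m[t, p] += 1
    return m

# Tiny illustrative label vectors (class indices 0-9).
y_true = [3, 3, 5, 0, 8]
y_pred = [5, 3, 3, 8, 8]
cm = error_confusion_matrix(y_true, y_pred)
```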
Looking at some of the 446 misclassified images, it seems that there is plenty of room for improvement in accuracy. I am sure there is also scope for improving the efficiency of the network.
Lots of them: Alan Turing, Yann LeCun, Geoffrey Hinton, Andrew Ng, Jürgen Schmidhuber, Yoshua Bengio, Rob Fergus, Alex Krizhevsky, Ilya Sutskever, ...
I was very surprised how much of a difference fine-tuning (finishing off training the network using a small number of training epochs with a low learning rate and without data augmentation) made.
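One way to express that schedule: a long main phase with a decaying learning rate and data augmentation on, followed by a few final epochs at a small fixed rate with augmentation off. The generator below is a hypothetical sketch; the epoch counts and rates are placeholders, not the values used in the competition:

```python
def training_schedule(n_epochs, finetune_epochs=5, base_lr=0.1,
                      decay=0.99, finetune_lr=1e-4):
    """Yield (learning_rate, use_augmentation) for each epoch.

    Main phase: exponentially decaying learning rate, augmentation on.
    Fine-tuning phase: a handful of final epochs at a small fixed rate
    with augmentation switched off, so the network sees undistorted
    training images one last time.
    """
    for epoch in range(n_epochs):
        if epoch < n_epochs - finetune_epochs:
            yield base_lr * decay ** epoch, True
        else:
            yield finetune_lr, False

schedule = list(training_schedule(100))
```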
My pleasure. Academia can be a bit antisocial, so it is lovely to see so much enthusiasm going into Kaggle competitions.
Phil Culliton is a game developer and Senior Researcher at an NLP startup. With a score of 0.94120, his team took 6th place.
Our 6th place submission used multiple iterations (with varying epoch counts) of a single network architecture.
We also used a "trick" suggested by Dr. Graham which incorporated a small number of epochs that used no affine transformations.
The network architecture in question was Dr. Graham's spatially sparse CNN. It used 12 LeNet layers and a final softmax layer - it looked roughly like this (this is modified output from Dr. Graham's code):
LeNetLayer 128 neurons, VeryLeakyReLU
LeNetLayer 128 neurons, VeryLeakyReLU
MP2
LeNetLayer 384 neurons, Dropout 0.0833333 VeryLeakyReLU
LeNetLayer 384 neurons, VeryLeakyReLU
MP2
LeNetLayer 768 neurons, Dropout 0.208333 VeryLeakyReLU
LeNetLayer 768 neurons, VeryLeakyReLU
MP2
LeNetLayer 1280 neurons, Dropout 0.3 VeryLeakyReLU
LeNetLayer 1280 neurons, VeryLeakyReLU
MP2
LeNetLayer 1920 neurons, Dropout 0.4 VeryLeakyReLU
LeNetLayer 1920 neurons, VeryLeakyReLU
MP2
LeNetLayer 2688 neurons, Dropout 0.5 VeryLeakyReLU
LeNetLayer 2688 neurons, VeryLeakyReLU
LeNetLayer 10 neurons, Softmax Classification
The "MP" entries above denote max pooling, and "VeryLeakyReLU" denotes a "leaky" ReLU with a fairly large non-zero gradient on the negative side (alpha = 0.33).
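In other words, instead of zeroing negative inputs, the unit passes them through scaled by alpha. A one-line numpy version:

```python
import numpy as np

def very_leaky_relu(x, alpha=0.33):
    """Leaky ReLU with an unusually large negative-side slope.

    alpha = 0.33 as described above, versus ~0.01 for a standard leaky
    ReLU; positive inputs pass through unchanged.
    """
    return np.where(x >= 0, x, alpha * x)
```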
DropOut was implemented in a straightforward manner. I considered adding DropConnect into the mix but ran out of time to test it.
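The two regularizers differ only in where the random mask is applied: DropOut zeroes whole unit activations after the matrix product, while DropConnect zeroes individual weights before it. A hedged numpy sketch of both forward passes for a single fully-connected layer (function names are my own):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout_forward(x, W, rate):
    """DropOut: compute the layer output, then zero whole units."""
    h = x @ W
    mask = rng.random(h.shape) >= rate
    return h * mask / (1.0 - rate)

def dropconnect_forward(x, W, rate):
    """DropConnect: zero individual weights before the matrix product."""
    mask = rng.random(W.shape) >= rate
    return x @ (W * mask) / (1.0 - rate)

x = rng.standard_normal((8, 32))   # toy minibatch
W = rng.standard_normal((32, 16))  # toy weight matrix
```

With rate = 0 both functions reduce to the plain layer `x @ W`; at training time the division by (1 - rate) keeps the expected output unchanged.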
Input images were distorted using a semi-random system of stretching and flipping - I played around with this but also ran out of time to properly validate it.
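A distortion pipeline of that flavour can be sketched in a few lines of numpy. This toy version (my own stand-in, not the code used in the competition) flips an image left-right with probability 0.5 and applies a random horizontal stretch via nearest-neighbour resampling:

```python
import numpy as np

def augment(image, rng, max_stretch=0.2):
    """Randomly flip an image left-right and stretch it horizontally.

    image: (H, W, C) array. The stretch resamples columns with
    nearest-neighbour indexing, so the output width varies by up to
    +/- max_stretch around the original.
    """
    if rng.random() < 0.5:
        image = image[:, ::-1]          # horizontal flip
    h, w = image.shape[:2]
    new_w = int(w * (1 + rng.uniform(-max_stretch, max_stretch)))
    cols = np.clip((np.arange(new_w) * w / new_w).astype(int), 0, w - 1)
    return image[:, cols]               # nearest-neighbour resample
```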
Earlier in the competition I did attempt to ensemble multiple network architectures but none of them outperformed the top contender.
I mention this again later, but getting CUDA installed and running properly on various machines turned out to be a much bigger task than I thought it would be - it was difficult and time-consuming. I'm an old hand at getting cranky C code to compile - like, say, porting Windows codebases to OSX - so when I say I saw some weird stuff in trying to get CUDA-based libraries to run, I mean it.
Also - in a normal Kaggle competition I try to make use of all of the submissions available to me, even if it's just to try oddball approaches that may or may not work. However, for CIFAR-10, coming up with the machine time was an issue. I farmed the work out over AWS GPU servers as well as multiple local servers, but AWS quickly became expensive and eventually I had to stop using it.
Finding the right ratio of network size / sample batch size / speed for each server also took some care. I discovered that sample batch size (the number of samples sent to the GPU at a time) actually had an effect on final results, although I haven't yet quantified it. I'd be interested in exploring that further.
For the top submission's neural networks we used Dr. Graham's reference code in CUDA / C++, with variations in parameters and some extremely minor changes.
The biggest pro was speed - we were training simply enormous networks and it could only have worked using GPGPUs. The cons - complexity of setup and installation. Each machine's CUDA install was a new mini-adventure, some of which didn't turn out so well. I hadn't played with CUDA much before, and frankly I'm not too enamored with it. Getting it working properly - and compiling OpenCV with it! - on OSX was ridiculously hard. Eventually I switched over to all-Linux CUDA servers, where the task was marginally easier.
Luckily Dr. Graham's code was very adaptable and didn't have any strange library requirements - several of the other libraries we attempted to use required very specific / old versions of CUDA and would only work if you had a particular compiler, etc., or weren't amenable to running on one platform or another.
I also tried simple neural networks using H2O in R, kNN with scikit-learn, and Vowpal Wabbit. I'm a pretty heavy user of the latter two, but H2O was new to me. All produced interesting results, but none ground-breaking.
I did really like H2O's deep learning implementation in R, though - the interface was great, the back end extremely easy to understand, and it was scalable and flexible. Definitely a tool I'll be going back to.
Several. DropOut, DropConnect, and network architecture papers were heavily featured. I had just been doing some NN work in my day job so I got some dual-purpose reading done.
I heartily recommend Dr. Graham's preprint about the architecture we used - you can find it on his website.
I spent a fair bit of time on fastml.com as well - their articles on CIFAR-10 were beyond useful.
This was my first time using convnets for anything! I was impressed with their power and accuracy. I was also impressed at the number of GPU hours (and expense) it took to run a decent-sized network. It certainly isn't for the impatient or faint of heart.
I strongly suspect that deep learning / NNs will bubble toward the top of my toolbox for some problems. Definitely on anything remotely similar to CIFAR I'll be headed to the code from this competition first - probably with an email to Dr. Graham shortly thereafter.
Sure! I noticed that Dr. Graham was consistently on the top of the leaderboard and clicked through his Kaggle profile to find out if he was working for an ML company or using a particular product. There wasn't anything on his profile except for a link that was only partially visible, so I hopped on Google and dug around a bit.
It was a slightly convoluted process, but I eventually made my way to his website and noted that he had several sets of sample / reference code for dealing with CIFAR-10 that were freely available and accompanied by (rather excellent) write-ups. I grabbed a set and started trying to work with it, had some problems getting it going, and sent him a question. I figured I wouldn't hear back from him - frankly I wasn't sure whether he'd be willing to help his competition.
However, within a few hours he'd sent me a version of the code with all the issues ironed out and some friendly comments! Shortly thereafter he shared that same code on the forums, which was great as that got even more people using it.
We kept in touch during the competition; whenever he updated the code on the forums he'd send me an email letting me know and encouraging me to keep trying (although by the end of the competition it was pretty clear to me that he was going to win).
He was a great sport and a tremendous help, and I'm looking forward to seeing more of his work in the future.
Zygmunt Zając is the author of FastML and a Machine Learning Researcher. He used DropConnect to improve his accuracy to 0.90660, good for 18th place.
I have used models trained by Li Wan, the author of DropConnect. The details are outlined in the paper: Regularization of Neural Networks using DropConnect.
The challenges were getting the data in and the predictions out, as usual. In this case it meant converting raw images into cuda-convnet format and learning how to get the predictions from the library.
On top of that, getting DropConnect code to work was a bit tricky. You can read about the journey here:
I used Alex Krizhevsky's cuda-convnet extended with Li Wan's code. Cuda-convnet struck me as a very well designed and implemented library.
No.
Mainly the dropout paper by Hinton et al. and the DropConnect paper by Wan et al. There are other references in the FastML articles mentioned above.
It was my first brush with convolutional networks; I gained a general idea of how they work. I also learned that it isn't as easy to overfit as I thought.
As for DropConnect, it seems to offer results similar to dropout. The state-of-the-art scores reported in the paper come from model ensembling.
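The ensembling referred to is typically just averaging the per-model class probabilities before taking the argmax. A minimal sketch with made-up probabilities:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model softmax outputs and take the argmax per sample.

    prob_list: list of (n_samples, n_classes) probability arrays, one
    per model in the ensemble.
    """
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Two toy models that agree on the first sample but disagree on the second.
p1 = np.array([[0.9, 0.1], [0.6, 0.4]])
p2 = np.array([[0.8, 0.2], [0.2, 0.8]])
preds = ensemble_predict([p1, p2])
```

Here the averaged probabilities are [0.85, 0.15] and [0.4, 0.6], so the ensemble predicts class 0 for the first sample and class 1 for the second.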
I went into the cats and dogs competition after CIFAR-10, exposure to convnets certainly helped. Generally the knowledge can be directly applied to contests dealing with images.
I exchanged a few emails with Li Wan after I asked him for help with getting his code to work. I mentioned the article I was writing and he saw fit to post a link.
These interviews were conducted over email. I would like to thank everyone for taking part in these interviews, and I hope the resulting article may serve as a resource for convolutional nets, the CIFAR-10 dataset and the Kaggle competition.
Read our interview with a founding father of convolutional nets, Yann LeCun, here >>
CIFAR-10 Competition Winners: Interviews with Dr. Ben Graham, Phil Culliton, & Zygmunt Zając
Original post: http://www.cnblogs.com/yymn/p/4718651.html