CIFAR-10 Competition Winners: Interviews with Dr. Ben Graham, Phil Culliton, & Zygmunt Zając
Published: 2019-06-15


Dr. Ben Graham

Dr. Ben Graham is an Assistant Professor in Statistics and Complexity at the University of Warwick. With a categorization accuracy of 0.95530, he took first place.

Congratulations on winning the CIFAR-10 competition! How do you feel about your victory?

Thank you! I am very pleased to have won, and quite frankly pretty amazed at just how competitive the competition was.

When I first saw the competition, I did not think the test error would go below about 8%. I assumed 32x32 pixels just wasn't enough information to identify objects very reliably. As it turned out, everyone in the top 10 got below 7%, which is roughly on par with the estimated human error rate.

Can you tell us about the setup of the network? How many layers?

It is a deep convolutional network with the following architecture:

input = (3x126x126)
320C2 - 320C2 - MP2
640C2 - 10% dropout - 640C2 - 10% dropout - MP2
960C2 - 20% dropout - 960C2 - 20% dropout - MP2
1280C2 - 30% dropout - 1280C2 - 30% dropout - MP2
1600C2 - 40% dropout - 1600C2 - 40% dropout - MP2
1920C2 - 50% dropout - 1920C1 - 50% dropout - 10C1 - Softmax output
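As a quick sanity check on those dimensions, the spatial size can be traced through the stack in a few lines of Python (my own sketch, not Dr. Graham's code): each C2 convolution shrinks the feature map by one pixel, each MP2 pooling halves it, and the trailing 1x1 convolutions leave it unchanged.

```python
def deepcnet_output_size(n):
    # five stages of [C2, C2, MP2], then the final 1920C2;
    # the 1x1 convolutions afterwards do not change the spatial size
    for _ in range(5):
        n = n - 1   # C2: 2x2 convolution, stride 1
        n = n - 1   # C2
        n = n // 2  # MP2: 2x2 max pooling
    return n - 1    # final C2

print(deepcnet_output_size(126))  # → 1, a single spatial position per class
```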

It was trained taking advantage of:

  • spatial-sparsity in the 126x126 input layer,
  • batchwise dropout,
  • (very) leaky rectified linear units, and
  • affine spatial and color-space training data augmentation.
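The affine spatial augmentation in the last bullet can be sketched as follows. This is a minimal nearest-neighbour version of the general idea, written by me for illustration; the parameter ranges are hypothetical, not those used in the winning entry.

```python
import math
import random

def random_affine(img, max_rot=0.1, max_scale=0.1, max_shift=2.0):
    # Apply a small random rotation, scaling, and shift to a 2-D image
    # (a list of rows), sampling the source with nearest-neighbour lookup.
    h, w = len(img), len(img[0])
    angle = random.uniform(-max_rot, max_rot)
    scale = 1.0 + random.uniform(-max_scale, max_scale)
    dx = random.uniform(-max_shift, max_shift)
    dy = random.uniform(-max_shift, max_shift)
    ca, sa = scale * math.cos(angle), scale * math.sin(angle)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # map each output pixel back to a source location
            sx = ca * (x - cx) + sa * (y - cy) + cx + dx
            sy = -sa * (x - cx) + ca * (y - cy) + cy + dy
            xi, yi = int(round(sx)), int(round(sy))
            if 0 <= xi < w and 0 <= yi < h:
                out[y][x] = img[yi][xi]
    return out
```

Color-space augmentation works analogously, jittering channel values rather than pixel coordinates.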

The same architecture produces a test error of 20.68% for CIFAR-100.

These cats evaded the DeepCNet solution by looking a lot like a fighter jet and a car.

Can you tell us a little about the hardware used to train the nets? How long did it take to train? What was the development cycle like?

The network took about 90 hours to train on a GTX 780 graphics card. I had already written a convolutional neural network library for spatially-sparse inputs for earlier work.

Over the course of the competition I upgraded the program to allow dropout to be applied batchwise, and cleaned up some kernels that were accessing memory inefficiently. That made it feasible to train pretty large networks.
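Batchwise dropout means sampling one dropout mask per minibatch rather than one per sample, which is cheaper on the GPU. A minimal sketch of the idea (my own illustration, using the usual inverted scaling so expected activations are unchanged):

```python
import random

def batchwise_dropout(batch, p):
    # One mask is shared by every sample in the minibatch; kept units
    # are scaled by 1/(1-p) so the expected activation is unchanged.
    n_features = len(batch[0])
    mask = [0.0 if random.random() < p else 1.0 / (1.0 - p)
            for _ in range(n_features)]
    return [[x * m for x, m in zip(sample, mask)] for sample in batch]
```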

Which papers/approaches authored by other scientists contributed the most to your top score?

The network architecture is the result of borrowing ideas from a number of recent papers:

  • Multi-column deep neural networks for image classification; Ciresan, Meier and Schmidhuber
  • Network In Network; Lin, Chen and Yan
  • Very Deep Convolutional Networks for Large-Scale Image Recognition; Simonyan and Zisserman.

Reading each of those papers was jaw-dropping as the ideas would not have occurred to me.

These images were all correctly classified. To the net they look the most like their respective classes. From DeepCNet's extremes.

Where do you see convnets in the future? Anything in particular that you are excited about?

I am very interested in the idea of 3D convolutional networks. For example, given a length of string, you might be able to pull both ends to produce a straight line. Alternatively, the string might contain a knot which you cannot get rid of no matter how hard you pull. That idea is obvious to humans, but hard for a computer, as there are so many different kinds of knots.

Hopefully 3d convolutional networks can develop some of the physical intuition humans take for granted.

Besides convnets, I am very interested in machine learning techniques for time-series data, such as recurrent neural networks.

Thank you very much for sharing your code on the forums. What is your opinion on sharing code?

My pleasure; it was nice to see a couple of the other teams in the top 10 use the code. Another Kaggler also made his implementation available during the competition. It was fascinating to see him implement some of the ideas coming out of the recent literature, such as "C3-C3-MP2" layers.

Do you think your convnet could be improved even more on this task, or do you feel it is close to its limit?

After the competition, I re-ran my top network on the 10,000 images from the original CIFAR-10 test set, resulting in 446 errors.

Here is a confusion matrix showing where the 446 errors come from:

            airplane automobile bird  cat deer  dog frog horse ship truck
airplane        0        3       10    2    2    0    2    0    16    3
automobile      1        0        1    0    0    0    0    0     3   12
bird            8        1        0   14   19    8    9    5     2    0
cat             4        1        8    0    9   57   20    2     5    2
deer            3        1       12    7    0    5    4    8     0    0
dog             4        1        7   39   10    0    1    7     1    1
frog            4        0        7    7    3    1    0    1     0    1
horse           6        0        3    4    7    8    0    0     0    0
ship            2        3        2    0    0    1    0    0     0    3
truck           3       20        0    2    0    0    1    0     7    0
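Reading the rows as true classes and the columns as predictions (an assumption on my part), the dominant confusion can be pulled out programmatically from the numbers above:

```python
classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]
confusion = [
    [0, 3, 10, 2, 2, 0, 2, 0, 16, 3],
    [1, 0, 1, 0, 0, 0, 0, 0, 3, 12],
    [8, 1, 0, 14, 19, 8, 9, 5, 2, 0],
    [4, 1, 8, 0, 9, 57, 20, 2, 5, 2],
    [3, 1, 12, 7, 0, 5, 4, 8, 0, 0],
    [4, 1, 7, 39, 10, 0, 1, 7, 1, 1],
    [4, 0, 7, 7, 3, 1, 0, 1, 0, 1],
    [6, 0, 3, 4, 7, 8, 0, 0, 0, 0],
    [2, 3, 2, 0, 0, 1, 0, 0, 0, 3],
    [3, 20, 0, 2, 0, 0, 1, 0, 7, 0],
]
# largest off-diagonal entry = most common single mistake
count, true_cls, pred_cls = max(
    (confusion[i][j], classes[i], classes[j])
    for i in range(10) for j in range(10) if i != j)
print(count, true_cls, pred_cls)  # → 57 cat dog
```

Cats mistaken for dogs (57) and dogs mistaken for cats (39) dominate the remaining errors.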

Looking at some of the 446 misclassified images, it seems that there is plenty of room for improvement in accuracy. I am sure there is also scope for improving the efficiency of the network.

Which machine learning scientist inspires you?

Lots of them: Alan Turing, Yann LeCun, Geoffrey Hinton, ...

Anything of note on the competition and/or dataset that you found surprising? An approach that worked unexpectedly well, or perhaps did not work for you?

I was very surprised how much of a difference fine-tuning (finishing off training the network using a small number of training epochs with a low learning rate and without data augmentation) made.
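That fine-tuning recipe can be sketched schematically as a training plan. The epoch counts and learning rates below are made-up placeholders of mine, not the values actually used; the point is the shape of the final phase: a few epochs, low learning rate, no data augmentation.

```python
def training_plan(n_epochs=300, finetune_epochs=10, base_lr=0.003):
    # Normal training with a decaying learning rate and augmentation,
    # then a short fine-tuning phase: low constant rate, no augmentation.
    plan = []
    for epoch in range(n_epochs):
        finetuning = epoch >= n_epochs - finetune_epochs
        plan.append({
            "epoch": epoch,
            "lr": base_lr * 0.01 if finetuning else base_lr * 0.99 ** epoch,
            "augment": not finetuning,
        })
    return plan
```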

Again, thank you very much for sharing your code. Our team would not have beaten the estimated human error rate without it!

My pleasure. Academia can be a bit antisocial, so it is lovely to see so much enthusiasm going into Kaggle competitions.

Phil Culliton

Phil Culliton is a game developer and Senior Researcher at an NLP startup. With a score of 0.94120, his team took 6th place.

Can you tell us about the architecture of the net? Number of layers etc.?

Our 6th place submission used multiple iterations (with varying epoch counts) of a single network architecture.

We also used a "trick" suggested by Dr. Graham which incorporated a small number of epochs that used no affine transformations.

The network architecture in question was Dr. Graham's spatially sparse CNN. It used 12 LeNet layers and a final softmax layer - it looked roughly like this (this is modified output from Dr. Graham's code):

LeNetLayer  128 neurons, VeryLeakyReLU
LeNetLayer  128 neurons, VeryLeakyReLU, MP2
LeNetLayer  384 neurons, Dropout 0.0833333, VeryLeakyReLU
LeNetLayer  384 neurons, VeryLeakyReLU, MP2
LeNetLayer  768 neurons, Dropout 0.208333, VeryLeakyReLU
LeNetLayer  768 neurons, VeryLeakyReLU, MP2
LeNetLayer 1280 neurons, Dropout 0.3, VeryLeakyReLU
LeNetLayer 1280 neurons, VeryLeakyReLU, MP2
LeNetLayer 1920 neurons, Dropout 0.4, VeryLeakyReLU
LeNetLayer 1920 neurons, VeryLeakyReLU, MP2
LeNetLayer 2688 neurons, Dropout 0.5, VeryLeakyReLU
LeNetLayer 2688 neurons, VeryLeakyReLU
LeNetLayer   10 neurons, Softmax Classification

The "MP" entries above denote max pooling, and "VeryLeakyReLU" denotes a "leaky" ReLU with a fairly large non-zero gradient for negative inputs (alpha was 0.33).
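A "very leaky" ReLU simply uses a larger-than-usual slope on the negative side; with alpha = 0.33 it looks like this (a one-line sketch of mine):

```python
def very_leaky_relu(x, alpha=0.33):
    # identity for positive inputs, a steep non-zero slope for negatives
    return x if x >= 0 else alpha * x

print(very_leaky_relu(3.0), very_leaky_relu(-3.0))
```

The large negative slope keeps gradients flowing through inactive units, which helps very deep stacks train.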

DropOut was implemented in a straightforward manner. I considered adding more regularization into the mix but ran out of time to test it.

Input images were distorted using a semi-random system of stretching and flipping - I played around with this but also ran out of time to properly validate it.

Earlier in the competition I did attempt to ensemble multiple network architectures but none of them outperformed the top contender.

The cat on the left looks most like a frog. The cats on the right trick the net into seeing a boat. From DeepCNet's extreme errors.

What were the technical challenges to overcome to produce submissions for this challenge?

I mention this again later, but getting CUDA installed and running properly on various machines turned out to be a much bigger task than I thought it would be - it was difficult and time-consuming. I'm an old hand at getting cranky C code to compile - like, say, porting Windows codebases to OSX - so when I say I saw some weird stuff in trying to get CUDA-based libraries to run, I mean it.

Also - in a normal Kaggle competition I try to make use of all of the submissions available to me, even if it's just to try oddball approaches that may or may not work. However, for CIFAR-10, coming up with the machine time was an issue. I farmed the work out over AWS as well as multiple local servers, but AWS quickly became expensive and eventually I had to stop using it.

Finding the right ratio of network size / sample batch size / speed for each server also took some care. I discovered that sample batch size (the number of samples sent to the GPU at a time) actually had an effect on final results, although I haven't yet quantified it. I'd be interested in exploring that further.

Which libraries did you use? Can you give some of the pros and cons?

For the top submission's neural networks we used Dr. Graham's reference code in CUDA / C++, with variations in parameters and some extremely minor changes.

The biggest pro was speed - we were training simply enormous networks, and it could only have worked using GPGPUs. The cons: complexity of setup and installation. Each machine's CUDA install was a new mini-adventure, some of which didn't turn out so well. I hadn't played with CUDA much before, and frankly I'm not too enamored with it. Getting it working properly - and compiling the code with it! - on OSX was ridiculously hard. Eventually I switched over to all-Linux CUDA servers, where the task was marginally easier.

Luckily Dr. Graham's code was very adaptable and didn't have any strange library requirements - several of the other libraries we attempted to use required very specific / old versions of CUDA and would only work if you had a particular compiler, etc., or weren't amenable to running on one platform or another.

Did you try anything else besides convolutional nets?

I also tried simple neural networks using H2O in R, kNN, and a couple of other standard tools. I'm a pretty heavy user of the latter, but H2O was new to me. All produced interesting results, but none ground-breaking.

I did really like H2O's deep learning implementation in R, though - the interface was great, the back end extremely easy to understand, and it was scalable and flexible. Definitely a tool I'll be going back to.

Did you read any papers for this competition?

Several. DropOut, DropConnect, and network architecture papers were heavily featured. I had just been doing some NN work in my day job so I got some dual-purpose reading done.

I heartily recommend Dr. Graham's paper about the architecture we used - you can find it on arXiv.

I spent a fair bit of time on tutorial sites as well - their tutorials were beyond useful.

The DeepCNet architecture from "Graham B. (2014) Spatially-sparse convolutional neural networks"

What did you learn from this competition? First time using convnets for a Kaggle competition? Do you think you can apply any knowledge to future competitions?

This was my first time using convnets for anything! I was impressed with their power and accuracy. I was also impressed at the number of GPU hours (and expense) it took to run a decent-sized network. It certainly isn't for the impatient or faint of heart.

I strongly suspect that deep learning / NNs will bubble toward the top of my toolbox for some problems. Definitely on anything remotely similar to CIFAR I'll be headed to the code from this competition first - probably with an email to Dr. Graham shortly thereafter.

I heard you approached Dr. Ben Graham midway during the competition and he released code. Can you tell a little about how this came to be?

Sure! I noticed that Dr. Graham was consistently on the top of the leaderboard and clicked through his Kaggle profile to find out if he was working for an ML company or using a particular product. There wasn't anything on his profile except for a link that was only partially visible, so I hopped on Google and dug around a bit.

It was a slightly convoluted process, but I eventually made my way to his homepage and noted that he had several sets of sample / reference code for dealing with CIFAR-10 that were freely available and accompanied by (rather excellent) write-ups. I grabbed a set and started trying to work with it, had some problems getting it going, and sent him a question. I figured I wouldn't hear back from him - frankly I wasn't sure whether he'd be willing to help his competitors.

However, within a few hours he'd sent me a version of the code with all the issues ironed out and some friendly comments! Shortly thereafter he shared that same code on the forums, which was great as that got even more people using it.

We kept in touch during the competition; whenever he updated the code on the forums he'd send me an email letting me know and encouraging me to keep trying (although by the end of the competition it was pretty clear to me that he was going to win).

He was a great sport, a tremendous help and I'm looking forward to seeing more of his work in the future.

Zygmunt Zając

Zygmunt Zając is the author of FastML and a Machine Learning Researcher. He used DropConnect to improve his accuracy to 0.90660, good for 18th place.

Can you tell us about the architecture of the net? Number of layers etc.?

I used models trained by Li Wan, the author of DropConnect. The details are outlined in the paper "Regularization of Neural Networks using DropConnect".

Figure from the paper: Wan L., Zeiler M., Zhang S., LeCun Y., Fergus R. (2013) Regularization of Neural Networks using DropConnect.

What were the technical challenges to overcome to produce submissions for this challenge?

The challenges were getting the data in and the predictions out, as usual. In this case it meant converting raw images into cuda-convnet format and learning how to get the predictions from the library.
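For illustration, the batch layout resembles CIFAR-10's own pickled batches: one flattened row per image, stored channel-major (all red values first, then green, then blue), plus a parallel label list. A simplified sketch of mine, not the converter actually used; a real converter must also match the expected batch sizes and metadata:

```python
def to_batch(images, labels):
    # images: nested [row][col][channel] integer lists; returns the dict
    # that CIFAR-style loaders expect to find pickled on disk, with each
    # row laid out channel-major (all R, then G, then B)
    data = []
    for img in images:
        h, w = len(img), len(img[0])
        data.append([img[y][x][c] for c in range(3)
                     for y in range(h) for x in range(w)])
    return {"data": data, "labels": list(labels)}

# The batch would then be written out, e.g.:
#   import pickle
#   with open("data_batch_1", "wb") as f:
#       pickle.dump(to_batch(images, labels), f)
```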

On top of that, getting the DropConnect code to work was a bit tricky. You can read about the journey on FastML.

Which libraries did you use? Can you give some of the pros and cons?

I used Alex Krizhevsky's cuda-convnet extended with Li Wan's DropConnect code. Cuda-convnet struck me as a very well designed and implemented library.

Did you try anything else besides convolutional nets?

No.

Did you read any papers for this competition?

Mainly Hinton et al.'s dropout paper and Li Wan et al.'s DropConnect paper. There are other references in the FastML articles mentioned above.

What did you learn from this competition? Did any knowledge from previous competitions (cats vs. dogs) transfer? Do you think you can apply any knowledge to future competitions?

It was my first brush with convolutional networks; I gained a general idea of how they work. I also learned that they aren't as easy to overfit as I thought.

As for DropConnect, it seems to offer results similar to dropout. The state-of-the-art scores reported in the paper come from model ensembling.
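Where dropout zeroes activations, DropConnect zeroes individual weights. A toy fully-connected layer to illustrate the difference (my own sketch with the usual inverted scaling; the real implementation runs on the GPU inside cuda-convnet):

```python
import random

def dropconnect_layer(x, weights, biases, p=0.5):
    # Each weight (connection) is dropped independently with
    # probability p; surviving weights are scaled by 1/(1-p).
    out = []
    for w_row, b in zip(weights, biases):
        total = b
        for xi, wi in zip(x, w_row):
            if random.random() >= p:          # connection survives
                total += xi * wi / (1.0 - p)
        out.append(total)
    return out
```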

I went into another image competition after CIFAR-10, and the exposure to convnets certainly helped. Generally, the knowledge can be directly applied to contests dealing with images.

The dog on the left looks most like a horse. The dog on the right looks most like a cat. From DeepCNet's extreme errors.

I saw you being mentioned on Li Wan's page. Can you tell a little more about how this came to be?

I exchanged a few emails with Li Wan after I asked him for help with getting his code to work. I mentioned the article I was writing and he saw fit to post a link.

About the interviews

These interviews were conducted over email. I would like to thank everyone for taking part, and I hope the resulting article may serve as a resource on convolutional nets, the CIFAR-10 dataset, and the Kaggle competition.

Read our interview with a founding father of convolutional nets, Yann LeCun.

Reposted from: https://www.cnblogs.com/yymn/p/4718651.html
