FastGarden - A new ImageNette like competition (just for fun)

My best so far (without looking at what you folks have done) has been ~62-64%. Time to learn some new tricks now, looking at your notebooks.

1 Like

I tried a bunch of different architectures and techniques on this. I found that you can learn a lot from the various things people try, but it is hard to differentiate the usefulness of each method in the end. This is because we're looking for the best architecture for 5 epochs of training on 12k samples, and not necessarily the best architecture for the dataset. As a result, design choices that are good for 5 epochs may not be good choices for what one would use in practice, where the number of training epochs and samples is not limited.

I probably won't spend much more time on this, as I'm not sure the "best practices" for this challenge, with 5 epochs and 12k samples, transfer well to practical use. Happy to hear what other people think.

Otherwise, if anyone wants to team up for Kaggle challenges, I'm happy to join, but I'm pretty new to it. :-]

2 Likes

Had a day playing around with the basics yesterday, best so far was 71.24% (2nd currently) with progressive resizing. Looking forward to getting a little more exotic in the coming days.

Interestingly, using xresnet50 with cnn_learner to get the fastai head (first 5 experiments) didn't improve accuracy at all; the default xresnet50 head worked fine. lr_find after 2 epochs @ 128, with the data switched to 224:

Also, when using progressive resizing I tried to tune the lrs for each stage, but when using an lr for the second stage (224) based on lr_find (2e-3), performance dropped markedly. The network preferred the original lr (1e-2), which would appear to be "too high" given the lr_find plot.
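For reference, the two-stage recipe looks roughly like this in fastai v2. This is a sketch only: `get_dls(size)` is a hypothetical helper that builds the DataLoaders at a given image size, and the epoch counts/lrs just mirror the experiments above.

```python
from fastai.vision.all import *

# get_dls(size) = hypothetical helper returning DataLoaders at the given image size.
# Stage 1: a couple of epochs at the smaller image size.
learn = Learner(get_dls(128), xresnet50(n_out=104),   # 104 flower classes
                loss_func=CrossEntropyLossFlat(), metrics=accuracy)
learn.fit_one_cycle(2, 1e-2)

# Stage 2: swap in the larger images and keep training.
learn.dls = get_dls(224)
learn.lr_find()               # suggested ~2e-3 here, but 1e-2 actually trained better
learn.fit_one_cycle(3, 1e-2)
```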

repo

1 Like

Hey Morgan,

Just out of curiosity, do you have a script that runs through a bunch of different models, and then outputs that csv file? Or is it currently manually done?

Thanks - Daniel

Manual :slight_smile:

Current best result uses a modified XSE-ResNeXt 50 based on the Imagenette XSE-ResNeXt, with a custom (24,32,64) stem and GeM pooling. Stem inspired by Ross Wightman and GeM pooling by DrHB.

76.55% ± 0.25%: Notebook, second submission.

I also included just the (24,32,64) stem as Submission 1 which had an accuracy of 75.29% ± 1.09%. GeM required lowering the batch size from 64 to 56 with a P100.

Also tested a (24,48,64) stem, less augmentation, more augmentation, and bs=56 with AvgPool, all of which scored worse.
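For anyone unfamiliar with the tweaks: GeM (generalized-mean) pooling replaces the final average pool with a learnable-exponent pool, and the (24,32,64) stem uses those channel widths for the three stem convolutions. Below is a rough PyTorch/fastai sketch of both, illustrative only and not the exact modules from the notebook:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from fastai.vision.all import ConvLayer

class GeM(nn.Module):
    "Generalized-mean pooling: p=1 is average pooling, large p approaches max pooling."
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))   # learnable pooling exponent
        self.eps = eps

    def forward(self, x):
        return F.avg_pool2d(x.clamp(min=self.eps).pow(self.p),
                            x.shape[-2:]).pow(1.0 / self.p)

def small_stem(c_in=3, sizes=(24, 32, 64)):
    "Hypothetical XResNet-style stem: three 3x3 convs, the first with stride 2."
    sizes = (c_in, *sizes)
    return nn.Sequential(*[ConvLayer(sizes[i], sizes[i + 1], ks=3,
                                     stride=2 if i == 0 else 1)
                           for i in range(3)])
```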

6 Likes

Boom! That's great! Maybe you could post your 2nd-best solution on the leaderboard as well? I'm thinking it might help to see other high-performing methods. Maybe we could allow/limit up to 2 entries per person on the leaderboard - what do you think @muellerzr?

1 Like

Just finished up a batch of experiments with xresnet50 and FixRes, the current top-performing ImageNet technique. No leaderboard progress however :frowning:

The idea is that training CNNs at a lower resolution than your test resolution can give better results. When training at the same resolution as the test set, the apparent size of objects in the train set will appear larger than in the test set, at least if you are using a transform like RandomResizedCrop. They address this by:

  1. Training at a lower resolution, e.g. train at 160, test at 224
  2. Fine-tuning the classifier layer at the test image resolution (a rough sketch follows below)
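A minimal fastai v2 sketch of that two-step schedule (assumptions: `get_dls(size)` is a hypothetical helper returning DataLoaders at a given image size, and the epoch counts/lrs are placeholders rather than the settings from the actual runs):

```python
from fastai.vision.all import *

# Step 1: train at a lower resolution than the test set, e.g. 160px vs 224px.
learn = Learner(get_dls(160), xresnet50(n_out=104),   # get_dls() is hypothetical
                loss_func=CrossEntropyLossFlat(), metrics=accuracy)
learn.fit_one_cycle(4, 1e-2)

# Step 2: switch to test-resolution data and fine-tune only the classifier layer.
learn.dls = get_dls(224)
for p in learn.model.parameters():     p.requires_grad_(False)
for p in learn.model[-1].parameters(): p.requires_grad_(True)   # the final nn.Linear
learn.create_opt()            # rebuild the optimizer over the trainable params only
learn.fit_one_cycle(1, 1e-3)
```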

It is probably a technique for squeezing out the last few % of performance, and there are too few epochs in fastgarden to see its benefit. Out of curiosity I did two 20-epoch runs with FixRes: trained for 16 epochs at 192 and fine-tuned (classifier layer only) for 4 epochs at 224. It still performed worse than a baseline run of 20 epochs at 224.

5 Likes

Sure! If you have another technique that could be interesting, I'm thinking you could tag it onto your highest-placed entry as a "perhaps interesting idea" or something :slight_smile:

1 Like

There's a range of training times in the notebooks in this thread, and I was curious what setups everyone is using. My training times are definitely on the higher (slower) end compared to others.

Laptop (Windows, GTX 1060) > 6min/epoch

AWS (Tesla K80) > 7min40s/epoch

The above 2 notebooks are identical.

@morgan > 4min30s/epoch

@radek > 1min40s/epoch :exploding_head:

@muellerzr > 1min40s/epoch :exploding_head: (assuming Colab)

@rahulrav > 6min/epoch

@jwuphysics > 2min30s/epoch :exploding_head:

1 Like

Yes mine come from a T4 :slight_smile:

1 Like

Using a 2080 RTX Max-Q on a Laptop (more like a 2070 RTX super for realz).

Some timings here are crazy fast. Maybe it’s time to use that Colab Pro subscription.

1 Like

Yes for sure time to look at a Colab subscription

Here's another data point for a 2080 Ti, with and without fp16 (all settings similar to muellerzr's, except I changed the output classes to 104).

fp32


fp16
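For anyone reproducing the comparison, mixed precision in fastai v2 is a one-line change on the Learner. A minimal sketch: `get_dls()` is a hypothetical DataLoaders helper, and the epoch count/lr are placeholders.

```python
from fastai.vision.all import *

# Same setup as the fp32 run, with the mixed-precision callback added via .to_fp16().
learn = Learner(get_dls(224), xresnet50(n_out=104),   # get_dls() is hypothetical
                loss_func=CrossEntropyLossFlat(), metrics=accuracy).to_fp16()
learn.fit_one_cycle(5, 1e-2)   # learn.to_fp32() would switch back to full precision
```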

3 Likes

Wow, now that is fast! Thanks for sharing. I was using a smaller bs in the original post due to memory issues, which have now been resolved, so a higher bs brought my training times down, but still nowhere near this.

What can be helpful is seeing where the time is going: whether you are GPU bound (your GPU is fully under load (nearly) all of the time and using all of its memory) or whether you are CPU bound. There are probably more advanced ways of going about this, but this is how I do it.

I use tmux, so I open two new panes. In one, I run `nvidia-smi -l`, which regularly polls the GPU for stats (other people swear by `nvidia-smi dmon`, but I often find `nvidia-smi -l` easier to read). In the other pane I start `htop`, which gives me my RAM/swap and CPU utilization. These stats generally suffice for figuring out how I am using my machine and whether I am using it to its full potential. Sometimes, when I do something IO-heavy, I also use `iotop`.

As for Colab, even Pro comes with only two vCPUs. I am not sure if this is what is happening here, but for many scenarios this can be a very limiting factor. File access times from Google Drive have also been slow for me - it's best to copy the data locally before training, and when moving files around it's much faster to move one single file than multiple small ones.

My times come from a 1080ti and a Ryzen6.

3 Likes

Interesting! I'm using a 2080 with 11GB of GPU RAM.

@radek how would you rate `nvtop` for monitoring your GPU compared to `nvidia-smi -l`? It's the main thing I use to monitor GPU usage, but I haven't tried `nvidia-smi -l` or `htop`.

Looks really nice :slight_smile: Thank you for pointing me to it @morgan, will give it a try :slight_smile:

1 Like

Reporting a new best result using a novel[1] tweak to X-ResNeXt: MK-ResNeXt.

80.41% ± 0.73% : Notebook, Experiment 7.3.

MK-ResNeXt (Mixed Depthwise Convolutional Kernel or Multi-Kernel ResNeXt) modifies X-ResNeXt by replacing some convolutions with mixed depthwise convolutions (MDConv), which were proposed by Mingxing Tan & Quoc V. Le for MixNet.
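For anyone curious what an MDConv looks like, here is a rough PyTorch sketch of the MixNet-style operation - split the channels into groups and give each group its own depthwise kernel size. It is illustrative only, not the exact module from the notebook:

```python
import torch
import torch.nn as nn

class MDConv(nn.Module):
    "Mixed depthwise conv: each channel group gets its own depthwise kernel size."
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)             # hand any remainder to the first group
        self.splits = splits
        self.convs = nn.ModuleList([
            nn.Conv2d(c, c, k, padding=k // 2, groups=c)   # one depthwise conv per group
            for c, k in zip(splits, kernel_sizes)])

    def forward(self, x):
        xs = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(xi) for conv, xi in zip(self.convs, xs)], dim=1)

# e.g. a drop-in replacement for a 3x3 depthwise conv over 64 channels
x = torch.randn(2, 64, 32, 32)
assert MDConv(64)(x).shape == x.shape
```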

The notebook contains two early variants, D & E, each with multiple configurations which usually score over 78%.

I also did some unoptimized tests at 20 epochs with MK-ResNeXt and my prior tweaked X-ResNeXt submission. MK-ResNeXt performed better but the lead dropped to 0.5-1%, depending on the configuration.

Based on these experiments, I believe there is more tweaking to be done to find the best-performing configuration. If anyone knows of good resources that cover handcrafting neural network architectures, please send them my way.


  1. Novel to the best of my knowledge. If anyone knows of an existing paper or implementation, please let me know. It would save me a bunch of work. :slight_smile: ↩︎

7 Likes

Whoa, that's awesome, well done @bwarner!

1 Like