[Project] Stanford-Cars with fastai v1

@morgan I asked myself the exact same question. I think for research purposes it is okay to use the test set as the validation set. For the Stanford Cars dataset in the competition I entered, I thought it would be “cheating” to use the test set as validation, although that wasn’t specified.

Is the fit_fc function new in the library? I can’t seem to find it in the current version (1.0.57) that I am using.

1 Like

@jianshen92 run !pip install git+https://github.com/fastai/fastai.git to grab the absolute newest version of the library and use it

1 Like

Thanks both, I had been splitting the train set, but I think I’ll switch to using the test set for validation. I copied it from that crazy thread, but great that it’s being pushed to fastai, nice! :smiley:

I think if you want to compare performance with other researchers (outside of fast.ai), it would be more accurate with an independent test set that is not used to benchmark your training. That being said, I’m not sure how it is done when researchers report their results on benchmark datasets (ImageNet, etc.). @muellerzr do you have any insight into this?

1 Like

Generally how I do it is I use the labeled test set ‘trick’ that I found and report two scores: a validation accuracy and a test set accuracy. If you do a search for labeled test sets on the forum and filter to responses from me, you should be able to find the source code for my technique.

1 Like

Thanks @muellerzr, nice trick, posting one of your answers here for future reference:

1 Like

I had been wondering the same @jianshen92, I don’t think I recall reading a paper where they specify whether or not they used the test set as validation. So I never knew if it was just taken as a given or not…

For another example of them doing that, look at IMDB and how we train it. Jeremy does the same thing

1 Like

Positive signs with Ranger + Mish for EfficientNet-b3: 1-run test set accuracy of 93.9% on Stanford Cars after 40 epochs. Their paper quoted 93.6% for b3. Note I’m training on the full training set here, using the test set for validation.

I didn’t play around with the hyperparameters at all, just took what seemed to work well for Ranger:

  • 40 epochs
  • lr = 15e-4
  • start_pct = 0.10
  • wd = 1e-3

Will kick off 4 additional runs so I can get a 5-run average, but it’s slow going, 2h20m per run :smiley:

1 Like

Ok, finally got through running the model 5 times!

TL;DR

  • Achieved 93.8% 5-run, 40-epoch mean test set accuracy on Stanford Cars using Mish EfficientNet-b3 + Ranger
  • Beat the EfficientNet paper’s EfficientNet-b3 result by 0.2%
  • The EfficientNet authors’ best result using b3 was 93.6%; their best EfficientNet result was 94.8% (current SOTA) with EfficientNet-b7
  • Used MEfficientNet-b3, created by swapping the Swish activation function for the Mish activation function
  • Used the Ranger optimiser (a combination of RAdam and Lookahead) and trained with FlatCosAnnealScheduler
  • EfficientNet-b3 with Ranger but without Mish was giving test set accuracy around 93.4% (-0.4%), but was still much more stable to train than my efforts to train EfficientNet with RMSProp (which was used in the original paper)

Quick Medium post here, my first post, feedback welcome!

Code in my github here

Mean accuracy and standard deviation:

(figure: meffnetb3_acc_std_dev)

Validation set (=test set) accuracy, last 10 epochs:

Credits:

Training Params used:

  • 40 epochs
  • lr = 15e-4
  • start_pct = 0.10
  • wd = 1e-3
  • bn_wd=False
  • true_wd=True

Default Ranger params were used (the full training setup is sketched after this list):

  • alpha=0.5
  • k=6
  • N_sma_threshhold=5
  • betas=(.95,0.999)
  • eps=1e-5
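
Putting the two lists together, here’s a minimal sketch of the setup in fastai v1. This is an illustration rather than the exact repo code: it assumes lessw2020’s Ranger is importable as ranger.Ranger, that data (the DataBunch) and model (the Mish EfficientNet-b3) are already built, and that fit_fc keeps its dev-version signature (tot_epochs, lr, start_pct):

from functools import partial
from fastai.vision import *   # fastai v1
from ranger import Ranger     # assumption: lessw2020's Ranger implementation on the path

# Ranger with the default params listed above; weight decay is handled by the Learner
opt_func = partial(Ranger, alpha=0.5, k=6, N_sma_threshhold=5,
                   betas=(0.95, 0.999), eps=1e-5)

learn = Learner(data, model, opt_func=opt_func, metrics=[accuracy],
                wd=1e-3, true_wd=True, bn_wd=False)

# flat lr for the first 10% of training, then a cosine anneal (the FlatCosAnnealScheduler behaviour)
learn.fit_fc(tot_epochs=40, lr=15e-4, start_pct=0.10)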

Augmentations used:

  • Image size: 299 x 299
  • Standard fastai transforms from get_transforms():
    • do_flip = True, max_rotate = 10.0, max_zoom = 1.1, max_lighting = 0.2, max_warp = 0.2, p_affine = 0.75, p_lighting = 0.75
  • ResizeMethod.SQUISH, which I found worked quite well from testing with ResNet152 (rough data pipeline sketch after this list)
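
For reference, a rough sketch of that data pipeline with the fastai v1 data block API; the dataset path, folder layout (train/ and test/) and batch size are assumptions, not the exact repo code:

from fastai.vision import *

path = Path('data/stanford-cars')   # hypothetical dataset location with train/ and test/ folders

tfms = get_transforms(do_flip=True, max_rotate=10.0, max_zoom=1.1,
                      max_lighting=0.2, max_warp=0.2,
                      p_affine=0.75, p_lighting=0.75)

data = (ImageList.from_folder(path)
        .split_by_folder(train='train', valid='test')   # test set used as the validation set, as above
        .label_from_folder()
        .transform(tfms, size=299, resize_method=ResizeMethod.SQUISH)
        .databunch(bs=32)                                # batch size is an assumption
        .normalize(imagenet_stats))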

Training Notes

  • Unlike the testing done on the fastai forums with XResNet and the Imagewoof dataset, this setup performed better with a shorter flat-lr phase followed by a longer cosine anneal.
  • I used the full test set as the validation set, similar to the Imagewoof experiments in the fastai thread linked above.
  • I manually restarted the GPU kernel and changed the run count, as weights seemed to persist between runs. This happened even when using learn.purge() and learn.destroy(). There had been a mention on the forums that the Lookahead element of the Ranger implementation might have been responsible, but the problem persisted even after using the 9.3.19 version, which was supposed to address the issue.
  • Ran on a Paperspace P4000 machine
6 Likes

Hello, in your code I didn’t find any information about how to create EfficientNet with Mish, could you please give me more details about it? Thank you!

1 Like

Oh yep of course, I just replaced the relu_fn in the model.py file of EfficientNet-PyTorch with the below:

def mish_fn(x):
    # Mish activation: x * tanh(softplus(x))
    return x * torch.tanh(F.softplus(x))

Easy!

1 Like

For anyone that wants to go beyond b3, I found that the final-layer sizes (in_features of the classifier) for the other variants are:
b4 - 1792
b5 - 2048
b7 - 2560

These are what you need if you want to change the head via model._fc.
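
For example, a rough sketch of swapping the head on b7, assuming lukemelas’ EfficientNet-PyTorch package and a fastai DataBunch called data (so data.c is the number of classes):

import torch.nn as nn
from efficientnet_pytorch import EfficientNet   # assumption: lukemelas' EfficientNet-PyTorch

model = EfficientNet.from_pretrained('efficientnet-b7')
model._fc = nn.Linear(2560, data.c)   # 2560 in-features for b7 (1792 for b4, 2048 for b5)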

2 Likes

FYI I get 87.69% training xresnet18 from scratch on Stanford Cars, 250 epochs with lr=1e-3, using one-cycle.

Edit: that’s without Mixup
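
For anyone wanting to try something similar, a minimal from-scratch setup might look roughly like this, assuming the xresnet implementation that ships with fastai v1 and a data DataBunch built as earlier in the thread:

from fastai.vision import *
from fastai.vision.models.xresnet import xresnet18   # xresnet bundled with fastai v1

# no pretrained weights, i.e. training from scratch
learn = Learner(data, xresnet18(c_out=data.c), metrics=[accuracy])
learn.fit_one_cycle(250, 1e-3)   # one-cycle policy, 250 epochs, max lr = 1e-3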

1 Like

Nice result for a lightweight model!

94.79% with EfficientNet-b7 + Mish + Ranger

  • Achieved 94.79% (standard deviation of 0.094) 5-run, 40-epoch mean test set accuracy on Stanford Cars using Mish EfficientNet-b7 + Ranger (no mixup)
  • Matched the EfficientNet paper’s EfficientNet-b7 result of 94.7% (current SOTA is 96.2%, from Domain Adaptive Transfer Learning with Specialist Models)

Code here: https://github.com/morganmcg1/stanford-cars

4 Likes

Nice work! Seems like you’ve figured out what works to train EfficientNet more easily! It’s actually a big thing in itself to have reproduced the paper’s results. Your work might help everyone doing transfer learning. I know I was struggling with EfficientNet when it first came out.

Which model has SoTA results?

Also, you’re saying in the other thread that Mish + Ranger help with training stability. Have you experimented to see whether Ranger or Mish on its own helps?

I actually thought it was 94.8%, as per the results table in the EfficientNet paper, but they cite Domain Adaptive Transfer Learning with Specialist Models, which reports 96% if you look at the paper! From a glance, I think they pretrained an Inception-V3 model on only a selection of ImageNet images (e.g. with an emphasis on cars).

AutoAugment: Learning Augmentation Policies from Data is the paper with 94.8% (they actually quote a 5.2% error rate as their result, i.e. 94.8% accuracy)

I just updated the Papers With Code leader board here for the Stanford Cars dataset: https://paperswithcode.com/sota/fine-grained-image-classification-on-stanford

I had done some quick and dirty testing, yes, and found that individually they each seemed easier to train than the default EfficientNet Swish + RMSProp. But I don’t have much to back that up; it wasn’t a proper paper-style ablation test, so maybe take it with a grain of salt :wink:

1 Like

Hello Morgan.

Great implementation. I am trying to run your example code with a simpler and different dataset.

Instead of the data_test loading that you are using, I am trying to use the Keras ImageDataGenerator.

However, when it is time to train the model, I get an error:

How could I solve this error? Could you help me with this issue?

Thanks a lot.

I’m not sure you can mix and match the Keras image generator with fastai; better to stick within fastai if you can, I think…
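
If it helps, the fastai-native rough equivalent of a Keras ImageDataGenerator flowing from a directory would be something like this (the folder names, image size and batch size are placeholders):

from fastai.vision import *

# hypothetical layout: data/my_dataset/train/<class>/... and data/my_dataset/valid/<class>/...
data = (ImageDataBunch.from_folder(Path('data/my_dataset'), train='train', valid='valid',
                                   ds_tfms=get_transforms(), size=224, bs=32)
        .normalize(imagenet_stats))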