Switching to resnet50 drastically improves results?


I am pretty new to all this, so please forgive me if something does not make sense or if I am asking stupid questions.

My task is to classify spectrograms from 10-second sound files. There are two categories, “calls” and “nothing” (the “nothing” category can contain silence, mechanical sounds, people talking, anything but calls).
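For context, each clip gets turned into a spectrogram image before classification. Here is a simplified numpy sketch of the kind of STFT that produces such an image (the sample rate, FFT settings, and the synthetic tone are placeholders for illustration, not my actual values):

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=256):
    """Magnitude spectrogram via a simple STFT with a Hann window."""
    window = np.hanning(n_fft)
    frames = [
        np.abs(np.fft.rfft(window * signal[i:i + n_fft]))
        for i in range(0, len(signal) - n_fft + 1, hop)
    ]
    return np.array(frames).T  # shape: (freq_bins, time_frames)

# Synthetic 10-second "call": a 1 kHz tone at an assumed 8 kHz sample rate.
sr = 8000
t = np.arange(10 * sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
print(spec.shape)  # (257, 311)
```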

resnet34 results are very bad: not only is the error rate 0.5, it also seems to be increasing.

The top-losses plot makes some sense: the model seems to struggle with atypical calls.

The confusion matrix shows that calls are being confused with the nothing category.
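For reference, this is how I understand the relationship between the confusion matrix and error_rate (the counts below are made up for illustration, not my real numbers):

```python
import numpy as np

# Hypothetical counts: rows = actual, cols = predicted,
# classes ordered ["calls", "nothing"].
cm = np.array([[40, 60],    # 60 real calls predicted as "nothing"
               [10, 90]])

error_rate = 1 - np.trace(cm) / cm.sum()   # misclassified / total
recall = np.diag(cm) / cm.sum(axis=1)      # per-class accuracy

print(error_rate)  # 0.35
print(recall)      # [0.4 0.9]
```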

Moving on to unfreezing and adjusting learning rates made the error rate even worse, 0.69 now.

Next, I switched to resnet50 and the results are now nearly perfect, a 0.08 error rate! How is that possible?

The top-losses plot makes sense too: unusual and faint calls are confused for nothing.

And the confusion matrix got better too.

So my question is: why did resnet34 give basically nonsense results, while resnet50 gave nearly perfect ones (0.08 error rate)?


In the first image you posted with resnet34, it looks like you aren’t setting a learning rate. If the default learning rate isn’t close to what lr_find suggests, you won’t get good results and could see the error rate fail to improve. So I’d suggest running the learning rate finder and passing a good learning rate to fit_one_cycle.
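For intuition: fit_one_cycle sweeps the learning rate up to max_lr and back down over training, which is why the max_lr you pass matters so much. A rough numpy sketch of the schedule’s shape (this approximates the idea, not fastai’s exact implementation):

```python
import numpy as np

def one_cycle_lr(max_lr, steps, pct_start=0.3):
    """Cosine ramp up to max_lr, then cosine anneal back down.
    A sketch of the 1cycle idea, not fastai's exact schedule."""
    warm = int(steps * pct_start)
    up = max_lr * (1 - np.cos(np.linspace(0, np.pi, warm))) / 2
    down = max_lr * (1 + np.cos(np.linspace(0, np.pi, steps - warm))) / 2
    return np.concatenate([up, down])

lrs = one_cycle_lr(1e-3, steps=100)
# peaks at max_lr about a third of the way through, ends near zero
```

So a bad max_lr isn’t used for just one step; the whole run spends time near it, which is how a too-high value can wreck training.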

The natural question, though, is why it would work for resnet50. I think that’s just by chance: the default learning rate happens to be close to what works well for that model.


I did unfreeze and run lr_find later; it made the results even worse.

Today I tried removing normalization. It improved results somewhat, but they still seem too good to be true (and resnet50 still performs much better, although resnet34’s results now make a bit more sense), so folks suggested I am most likely overfitting.
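If it helps anyone judging this: the gap between train and validation loss is what points to overfitting. A trivial sketch of that check (the threshold here is an arbitrary choice of mine, not anything official):

```python
def looks_overfit(train_loss, valid_loss, ratio=10.0):
    """Crude heuristic: a validation loss far above the training loss
    suggests the model memorised the training set."""
    return valid_loss > ratio * train_loss

print(looks_overfit(0.002, 0.5))  # True
print(looks_overfit(0.3, 0.4))    # False
```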

Did you start over, or did you continue training the model from above? Once you’ve used a learning rate that’s too high, it’s hard to get the model to come back to sanity.

Also, looking at your screenshot, lr_find doesn’t do anything by itself: you still need to choose a learning rate and pass it to fit_one_cycle.

The sheer size of the gap between the resnet34 and resnet50 accuracy should be a red flag that something wasn’t right with the resnet34 training.


I did pass it to fit_one_cycle, and the results got worse.

I removed normalization and started from scratch again; this time resnet34 got a bit better:


learn.fit_one_cycle(2, max_lr=slice(1e-5,1e-3))

Total time: 00:08

epoch  train_loss  valid_loss  error_rate
1      0.002271    0.514406    0.175182
2      0.002753    1.060810    0.270073
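(As I understand it, max_lr=slice(1e-5, 1e-3) spreads the learning rate across layer groups, smallest for the earliest layers. Roughly like this log-spaced sketch, though fastai’s actual grouping may differ:)

```python
import numpy as np

def discriminative_lrs(lo, hi, n_groups=3):
    """Log-spaced learning rates from earliest to latest layer group,
    roughly what max_lr=slice(lo, hi) does across groups."""
    return np.geomspace(lo, hi, n_groups)

lrs = discriminative_lrs(1e-5, 1e-3)  # earliest -> latest layer group
```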

But resnet50 still gives super results, so something is still wrong.


learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-1))

Total time: 00:20

epoch  train_loss  valid_loss  error_rate
1      0.045139    2.985252    0.138686
2      0.258896    0.000000    0.000000
3      0.167193    0.021406    0.007299

The suggestions I got were to use more images, fewer epochs, dropout, and regularization; I’m planning to try those.

@kodzaks You need to start over with a fresh learner.

As @yeldarb has said, once you’ve used a learning rate that’s too high, it’s hard to get the model to come back to sanity. So I would recommend re-running everything. Remember to find a learning rate and use it from the start.
