Progressive resizing, model death

Hi,

I’m attempting to use progressive resizing on a fine-tuned resnet-50 that has trained without incident. I’ve tried a couple of different approaches, and without exception, when I move to larger image sizes the training “falls apart”: losses increase, metrics degrade, etc. I’ve tried more aggressive learning rates, lower ones, etc. Stymied. Any suggestions?

I’m having the same issue. Any luck since then?

Isn’t it the expected behaviour?

I’m having the same experience too.

If you want to know whether your model really improves, you can validate it on the older, smaller images and see how it performs there.
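For example (a minimal sketch assuming fastai v1’s API; `learn` is your learner and `data_small` is a databunch you kept around at the old, smaller size – both are placeholders from your own training):

```python
# evaluate the current model on the validation set built at the old, smaller size
results = learn.validate(data_small.valid_dl)  # returns [valid_loss, *metrics]
print(results)
```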

Hey, I don’t know why the model dies but I’m sure it’s not the expected behavior. I’ve implemented progressive image resizing and it did improve my model’s performance.

Hey @dipam7, I really liked your kernel on Kaggle! Could you explain why you chose these image sizes? In my case, I am using resnet50 with an original image size of 1200×1600, and I am resizing them to 320×320 as Jeremy does… but I was wondering which sizes (smaller but also bigger) I could try.

Hi @mgloria, I believe it’s a good idea to try image_size / 4, then image_size / 2, and finally the full image size. It also depends on the application and the kinds of things the neural net has to detect. Note that images in fastai are square, not rectangular. Try a few experiments: try different numbers of resizing steps (2, 3, or 4 times) and different sizes. But powers of 2 should give the best results.
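For concreteness, here’s a minimal sketch of that schedule, assuming fastai v1’s API (`ImageDataBunch`, `cnn_learner`); the PETS dataset, the `get_data` helper, the regex, and the sizes are placeholders you’d adapt to your own data:

```python
from fastai.vision import *

path = untar_data(URLs.PETS)/'images'   # placeholder dataset
full_size = 448                         # placeholder "final" image size

def get_data(size, bs=64):
    """Build a databunch at a given (square) image size."""
    return (ImageDataBunch.from_name_re(
                path, get_image_files(path), pat=r'/([^/]+)_\d+.jpg$',
                ds_tfms=get_transforms(), size=size, bs=bs)
            .normalize(imagenet_stats))

# train at full_size/4, then /2, then the full size, re-using the same learner
learn = cnn_learner(get_data(full_size // 4), models.resnet50, metrics=accuracy)
learn.fit_one_cycle(4)

for size in [full_size // 2, full_size]:
    learn.data = get_data(size)   # swap in the larger images (may need a smaller bs here)
    learn.fit_one_cycle(2)
```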

I’ve never heard of anyone going beyond the original image size. If you want to go bigger, I’m guessing you’d have to use a GAN or U-Net to upscale the resolution and then use that. Again, it’s just the first thing that comes to mind; I’m not sure whether it can be done or is the right way.

Hope this helps, Cheers!

Thanks a lot @dipam7, these rules of thumb are really what I was looking for!!
What about the batch size? Which ones do you use? I understand that a bigger batch leads to faster training, but I may also run out of GPU memory. I was wondering, however, whether it can negatively impact training, since with a bigger batch the weights are updated less often. So I am not sure whether I should feed a batch as big as my GPU allows or rather stick with the sizes Jeremy uses in the course. Any experience with this?

So the rule of thumb for that is: when you double the image size, you halve the batch size. As for bigger batch sizes, things happen in mini-batches in PyTorch, so it isn’t much of a concern. Jeremy says, “The best way to find out if you should do bla is to do bla and see.” :slight_smile:
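In code, that pairing might look something like this (again just a sketch, building on the hypothetical `get_data` helper from the earlier snippet, fastai v1 assumed):

```python
# pair each doubling of the image size with a halving of the batch size
for size, bs in [(112, 128), (224, 64), (448, 32)]:
    learn.data = get_data(size, bs)   # rebuild the databunch at each step
    learn.fit_one_cycle(2)
```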

Hey, could you try this:

After increasing the image size, freeze the model and fine-tune for a few epochs before unfreezing all the layers.
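Something like this, assuming fastai v1 and the hypothetical `get_data` helper from the earlier sketches:

```python
# after swapping in the bigger images, train only the head first, then unfreeze
learn.data = get_data(448, bs=32)
learn.freeze()                      # only the head is trainable
learn.fit_one_cycle(2)              # let the head adapt to the new resolution
learn.unfreeze()                    # then fine-tune the whole network
learn.fit_one_cycle(3, max_lr=slice(1e-5, 1e-4))
```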

Does “model death” mean the accuracy drops, or that it can’t train at all?

As you pointed out, it seems to me that the amount of increase in each step matters too. I increased the image size from 224 to 384 on the food-101 dataset, and top-1 accuracy improved from 80% to 87.8%.