Progressive resizing in fastai v1


(Bruce Yang) #1

Hi guys, just want to confirm here: is the code below still the valid way to do progressive resizing for image classification with fastai v1?

learn.load('model-trained-with-size-64')
learn.data = get_data(sz=128)  # get_data is a little helper function that returns an image DataBunch
learn.freeze()
learn.fit_one_cycle(3, lr, div_factor=40)

lrs = np.array([lr/9,lr/3,lr])
learn.unfreeze()
learn.fit_one_cycle(3, lrs, div_factor=40)
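
For context, get_data is just a thin wrapper, roughly along these lines (the path, transforms, and batch size here are placeholders, not my exact helper):

from fastai.vision import *

path = Path('data/my-dataset')  # placeholder dataset path

def get_data(sz, bs=64):
    # rebuild the DataBunch at the requested image size
    data = ImageDataBunch.from_folder(path, train='train', valid='valid',
                                      ds_tfms=get_transforms(), size=sz, bs=bs)
    return data.normalize(imagenet_stats)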

I’m asking because my model is currently training at size 128, and I noticed that the losses for the 3 epochs with the frozen model (all but the last layer group) are higher than the losses with the unfrozen model at size 64, though better than those with the frozen model at size 64.


#2

This should work, yes. A way to be sure would be to load one batch with x,y = next(iter(learn.data.train_dl)) and look at the size of x.
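
Concretely, something like this quick check (assuming your learner is called learn and your batch size is bs):

x, y = next(iter(learn.data.train_dl))
print(x.shape)  # should be torch.Size([bs, 3, 128, 128]) after switching to sz=128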


(RobG) #3

Progressive sizing looks OK.
Are you passing in the lrs in the right form? If you pass in a single max_lr, fit_one_cycle creates discriminative lrs from it; see lr_range in the docs.
Also beware that pct_start for one cycle is not a percentage like 10 as in v0, but a fraction like 0.1 in v1. I’d have preferred it stay an integer percent with that name, but it's not a big deal.
Call learn.recorder.plot_lr(show_moms=True) to check the lr schedule is what you intended.
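
For instance, a quick sketch (the pct_start value here is just illustrative):

learn.fit_one_cycle(3, max_lr=slice(lr/9, lr), pct_start=0.1, div_factor=40)
learn.recorder.plot_lr(show_moms=True)  # verify the lr/momentum schedule matches what you intended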


(Bruce Yang) #4

The size of the images in the new learn.data does get scaled up to the expected size. Thanks!


(Bruce Yang) #5

Thanks for the insights. fit_one_cycle does take a list of learning rates as max_lr.
The line of code below comes from the docs, right above lr_range: :grinning:

learn.fit_one_cycle(1, max_lr=(1e-4, 1e-3, 1e-2), wd=(1e-4,1e-4,1e-1))


(Jeremy Howard) #6

That’s only true if you pass in a slice object, or something listy.
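
Roughly (illustrative, assuming a learner with three layer groups):

learn.fit_one_cycle(3, max_lr=1e-3)                        # single float: same lr for every layer group
learn.fit_one_cycle(3, max_lr=slice(1e-5, 1e-3))           # slice: discriminative lrs spread across groups
learn.fit_one_cycle(3, max_lr=np.array([lr/9, lr/3, lr]))  # explicit per-group lrs ('listy')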