I would like to use a size of 128 for my images. Therefore I create a databunch like this:
```python
data = (ImageDataBunch.from_name_func(path, fnames, label_func=get_labels,
                                      ds_tfms=tfms, size=128, bs=bs)
        .split_by_rand_pct(seed=1)
        .label_from_func(get_labels)
        .databunch(bs=bs))
```
I could not find where to pass the `size` parameter. In other code snippets I found, it was passed to the `ImageDataBunch.from_xx()` factory functions. I know it is taken into account somewhere, thanks to `**kwargs`, but I could not find where.
So my questions are:
- With this code, is the databunch actually passing images of size 128 to the NN?
- Once I have trained my NN, I would like to retrain it using a size of 224. Will I have to recreate the model with a new `ImageDataBunch`? If so, how can the new NN benefit from the previous training? By loading the old weights into it? How?
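In case it helps clarify what I am after, here is a sketch of what I imagine the resize-and-retrain step might look like, using the data block API instead of the factory method (fastai v1; `path`, `fnames`, `get_labels`, `tfms`, and `bs` as above). I am not sure this is correct, which is part of my question:

```python
from fastai.vision import *

# First round: train at size 128 and save the weights.
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)
learn.save('stage-128')  # keep the weights trained at size 128

# Build a new databunch at size 224; here `size` goes to .transform().
data_224 = (ImageList.from_folder(path)
            .split_by_rand_pct(seed=1)
            .label_from_func(get_labels)
            .transform(tfms, size=224)
            .databunch(bs=bs))

# Swap the larger-image data into the same learner, reload the
# previous weights, and continue training -- is this the right way?
learn.data = data_224
learn.load('stage-128')
learn.fit_one_cycle(4)
```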
Thanks in advance!