ImageDataBunch create with size=128

Hi,

I would like to use a size of 128 for my images. Therefore I create a databunch like this:

data = ImageDataBunch.from_name_func(path, fnames, label_func=get_labels,
                                     ds_tfms=tfms, size=128, bs=bs, seed=1)

I could not find size listed in the signature of from_name_func; other code snippets show it being passed to the ImageDataBunch.from_xx() functions, so I know it is taken into account somewhere thanks to **kwargs, but I could not find where.

So my questions are:

  • With this code, is the databunch actually passing images of size 128 to the NN?
  • Once I have trained my NN, I'd like to retrain using a size of 224. Will I have to recreate the model with the new ImageDataBunch? If yes, how can the new NN benefit from the previous training? By loading the old weights into it? How?

Thanks in advance!

With the data block API, you can pass the size parameter in the .transform step. E.g.

.transform(get_transforms(), size=size)
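For example, a full data block pipeline could look like this (a minimal sketch; ImageList.from_folder is just one possible starting point, and get_labels, path and bs are taken from your snippet):

from fastai.vision import *

# size is consumed by .transform, which resizes every image to 128x128
data = (ImageList.from_folder(path)
                 .split_by_rand_pct(valid_pct=0.2, seed=1)
                 .label_from_func(get_labels)
                 .transform(get_transforms(), size=128)
                 .databunch(bs=bs)
                 .normalize(imagenet_stats))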

And sure, you can use the model later with a different size, since the convolutional weights do not depend on the input size (and the fastai head uses adaptive pooling, so it accepts varying image sizes). You have to create a new ImageDataBunch with the new size, pass it to a learner and then load the model you trained at the old size via learner.load('model').
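A minimal sketch of that two-stage workflow (resnet34, the epoch counts and the checkpoint name 'stage-128' are placeholders, not anything from your setup):

from fastai.vision import *

# Stage 1: train at size 128, then checkpoint the weights
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-128')

# Stage 2: same pipeline, resized to 224
data_224 = (ImageList.from_folder(path)
                     .split_by_rand_pct(valid_pct=0.2, seed=1)
                     .label_from_func(get_labels)
                     .transform(get_transforms(), size=224)
                     .databunch(bs=bs)
                     .normalize(imagenet_stats))

# Rebuild the learner on the larger images and reload the stage-1 weights;
# equivalently, just set learn.data = data_224 on the existing learner
learn = cnn_learner(data_224, models.resnet34, metrics=error_rate)
learn.load('stage-128')
learn.fit_one_cycle(4)

Note that learner.load only restores weights, so the new learner must be built with the same architecture (resnet34 here) and on data with the same classes.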
