In the previous version of the course I was able to do `data.resize` to quickly change the training size. How can I do this with v1.0 of fastai?
You can use `vision.transforms` to resize images (with a few different options, e.g. crop vs. squish). You can pass these transforms, along with a `size`, when you create your `ImageDataBunch`.

The docs for that are here:
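A minimal sketch of passing a `size` at creation time, assuming fastai v1; the dataset path and folder layout here are hypothetical, and the whole call is guarded so the snippet runs even where fastai isn't installed:

```python
# Hedged sketch: creating an ImageDataBunch with a target image size
# in fastai v1. "data/images" is a hypothetical dataset path.
try:
    from fastai.vision import ImageDataBunch, get_transforms

    data = ImageDataBunch.from_folder(
        "data/images",             # hypothetical folder with train/valid subdirs
        ds_tfms=get_transforms(),  # default augmentation transforms
        size=224,                  # resize every image to 224x224
        bs=32,                     # batch size
    )
except ImportError:
    # fastai v1 not available; the call above is illustrative only.
    data = None
```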
@yeldarb So you can no longer do `data.resize` and instead need to create a new `ImageDataBunch`? Also, is there any way to print the current input size of an `ImageDataBunch` object?
I believe you can change it after creation by calling `transform` with a new set of transformations on the DataLoader (e.g. `data.train_dl`).
You can see the size of a batch by calling `data.one_batch()[0].size()` (`one_batch` returns an `(x, y)` tuple). This will return something like `[32, 3, 224, 224]`, meaning you have a batch of 32 images, each having 3 channels and a width and height of 224.
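To make the meaning of that shape concrete, here's a small pure-Python sketch (the helper name is made up) that unpacks a batch shape into its parts:

```python
# Minimal sketch: interpreting a batch-tensor shape of the form
# (batch_size, channels, height, width), as returned above.
def describe_batch(shape):
    bs, channels, height, width = shape
    return f"batch of {bs} images, {channels} channels, {height}x{width} pixels"

print(describe_batch((32, 3, 224, 224)))
# → batch of 32 images, 3 channels, 224x224 pixels
```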
If you run something like `data.train_dl.transform(tfms=get_transforms(), size=400)`, you'll see that the size of that tensor changes to `[32, 3, 400, 400]`.
Ah, thank you.
It’d be nice if there was something like `data.sz` that returned the current input size.
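In the meantime, a hypothetical helper in that spirit (the name `current_size` is made up, not a fastai API) could derive the size from one batch:

```python
# Hypothetical stand-in for the requested `data.sz`: grab one batch
# from a fastai v1 DataBunch and read the spatial dims off its shape.
def current_size(data):
    x, _ = data.one_batch()     # x has shape (batch, channels, h, w)
    return tuple(x.shape[-2:])  # e.g. (224, 224)
```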