Resize images

Very simple question: How do I resize an image loaded with open_image? The source code shows that there is a resize function, but it takes either a single int or a TensorImageSize. My image isn’t square and I want to do something incredibly simple like:

img = open_image("image.jpg")
img.resize(300, 200)

But very surprisingly this doesn’t work. What should I input?

You can do it easily with fastai transforms. Take a look at this. Specify size as a tuple and set the padding mode to whatever suits you (most likely reflection).

Not sure if fastai does operations in place. Did you try img = img.resize(300, 200)?

Sorry, I was showing commands in a Jupyter Notebook. The problem isn’t whether the operation is done in place or not; it’s that resize only accepts one int or a TensorImageSize, which is a tuple of size 3. This means that, unfortunately, what one would think is a very standard call, img.resize(300, 200), returns an error.

@mgloria I’m wondering why I’d have to use a transform if there is a resize function available in the Image class, but please show me how I can define a transform to accomplish the resizing operation as specified (i.e. no padding, squishing, cropping etc. just standard resizing/scaling)

You need to use the datablock API (get_transforms alone is not that flexible). This will give you more flexibility. See example code:

    src = ImageList.from_folder(path).split_none().label_from_folder()
    tfms = get_transforms()  # or tfms=None if none are needed
    size = 224  # or a tuple, e.g. size=(224,224) or size=(400,224)
    data = (src.transform(tfms=tfms, size=size, resize_method=ResizeMethod.SQUISH)
               .databunch(bs=bs, num_workers=4)
               .normalize())
Look into the padding and resize methods to make sure you are getting the desired results. Note that if you really want no squishing at all, you can set resize_method=ResizeMethod.NO.
You will find the detailed info here:

Note: if you just want to resize images (no data augmentation at all) the link above also tells you how.

Thanks for the reply, but again, I just want to resize a single image for testing (and to do inference, not training, but that’s not really relevant here), so I don’t want/need to use the datablock API. I have a single test image that I read from a file using the open_image function. The resulting Image object has a resize function that allows me to resize it to a square using a single int, but it doesn’t provide a standard two-parameter form for scaling with a specified width and height. I imagine I can create a TensorImageSize tuple for that, but it’s not clear how (it has 3 elements).

Ah, okay. Got you now. Let me try…

Do it like this:

from PIL import Image
im1 = Image.open('valley.png')
width = 50
height = 42
im2 = im1.resize((width, height), Image.NEAREST)

You’re using a PIL image, not a fastai Image object.

After looking more into the code of the Image class, I found the solution:

img.resize(torch.Size([img.shape[0], new_height, new_width]))

Not very intuitive. :frowning:
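To keep the argument order straight, note that the tuple mirrors img.shape, i.e. (channels, height, width). A tiny helper (my own naming, not part of fastai) makes the intent explicit:

```python
# Helper (my own, not fastai's): build the (channels, height, width) tuple
# that fastai's Image.resize expects, from the more familiar width/height pair.
def resize_size(channels, new_width, new_height):
    return (channels, new_height, new_width)

size = resize_size(3, new_width=300, new_height=200)
print(size)  # -> (3, 200, 300)
```

With a fastai Image you would then call something like img.resize(torch.Size(resize_size(img.shape[0], 300, 200))).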


Thanks this helped!

For recent fastai versions (tested with 1.0.60), calling resize directly with an int should work (given the source code, it translates to resize(torch.Size([img.shape[0], new_size, new_size])) behind the scenes).
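The single-int behaviour described above can be mimicked in plain Python; this is just a sketch of the translation, not fastai’s actual source:

```python
# Sketch of the described translation: a single int becomes a square
# (channels, size, size) target; a 3-tuple is passed through unchanged.
def expand_size(img_shape, size):
    if isinstance(size, int):
        return (img_shape[0], size, size)
    return tuple(size)

print(expand_size((3, 480, 640), 224))            # -> (3, 224, 224)
print(expand_size((3, 480, 640), (3, 200, 300)))  # -> (3, 200, 300)
```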

Besides, if resize does not seem to have any effect, try calling the refresh method afterwards (i.e. img.refresh()), which applies the pending transforms.


Yeah, not very intuitive.


Thanks a lot!!!

I was looking for exactly this: how to resize the image using ImageList.

Thank you! This worked perfectly.

Can anyone help me? I’m using fastai (version 1.0.60) to process user-uploaded images. My model only accepts images of a specific dimension, (512, 384), so I’m trying to resize the user-uploaded image before prediction. What’s wrong with this?

def predict_single(img_resize):
    'function to take image and return prediction'
    prediction =

    image_resize =

    probs_list = prediction[2].numpy()
    return {
        'category': classes[prediction[1].item()],
        'probs': {c: round(float(probs_list[i]), 5) for (i, c) in enumerate(classes)}
    }
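The probability-to-dictionary step at the end of that function can be checked on its own; here is a minimal, self-contained sketch with made-up class names and values (illustrative only, not from the post):

```python
classes = ['cat', 'dog']            # hypothetical class list
probs_list = [0.123456, 0.876544]   # stands in for prediction[2].numpy()

# Same comprehension as in the function above: class name -> rounded probability
probs = {c: round(float(probs_list[i]), 5) for (i, c) in enumerate(classes)}
print(probs)  # -> {'cat': 0.12346, 'dog': 0.87654}
```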