Speed up image loading

I am working on a project that requires running inference on a Raspberry Pi. I noticed that I get a huge performance improvement in image loading when using this:

    x = PIL.Image.open(fn)
    x.draft('RGB', (224, 224))  # let the JPEG decoder return a downscaled draft
    x = pil2tensor(x, np.float32)

instead of the regular:

    x = PIL.Image.open(fn).convert('RGB')
    x = pil2tensor(x, np.float32)

Image load times drop from 1.6 seconds per image to just 0.13 seconds per image when using the draft function and loading directly at the required size!
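For context, this is roughly how I measured it (a minimal sketch; 'test.jpg' is a placeholder for one of my large JPEGs):

    # Rough timing sketch; 'test.jpg' stands in for a large JPEG on the Pi
    import time
    import numpy as np
    import PIL.Image
    from fastai.vision import pil2tensor

    fn = 'test.jpg'

    start = time.perf_counter()
    x = PIL.Image.open(fn).convert('RGB')
    x = pil2tensor(x, np.float32)
    print(f'regular: {time.perf_counter() - start:.2f}s')

    start = time.perf_counter()
    x = PIL.Image.open(fn)
    x.draft('RGB', (224, 224))          # decode a downscaled draft straight away
    x = pil2tensor(x, np.float32)
    print(f'draft:   {time.perf_counter() - start:.2f}s')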

For now I have created my own open_image function, but I can imagine this could also give a general performance improvement for training in fast.ai.
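Roughly what that looks like (a minimal sketch, loosely modeled on fastai v1's open_image; the function name, the size parameter and the defaults are my own choices, not fastai's API):

    # Sketch of a draft-based image opener; names and defaults are assumptions
    import numpy as np
    import PIL.Image
    from fastai.vision import Image, pil2tensor   # fastai's Image class, not PIL's

    def open_image_draft(fn, size=(224, 224), div=True, convert_mode='RGB'):
        x = PIL.Image.open(fn)
        x.draft(convert_mode, size)     # only has an effect for JPEG files
        x = x.convert(convert_mode)     # make sure we end up in the requested mode
        x = pil2tensor(x, np.float32)
        if div: x.div_(255)             # scale pixels to 0..1, like fastai's open_image
        return Image(x)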

If this seems like a good idea, I'm happy to help by creating a pull request, of course. What do you think?


One thing I notice is you crop right away (the 224,224). What’s the time if you maintain the original size?

I don't see any cropping happening. I'm playing around with the draft function in a Jupyter notebook now, but I don't fully grasp the inner workings yet.

If I use draft('RGB', (300, 300)) on a (5920, 3416) image, the result is a (740, 427) image… but it looks good and is really much faster 🙂 I'll dive into it some more. By the way, reading the Pillow docs, draft is only available for JPEG, which is something to keep in mind.
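If I read the Pillow docs right, the JPEG decoder can only downscale by a factor of 1/2, 1/4 or 1/8, and draft picks the largest reduction that keeps the image at least as big as the requested size, which would explain (5920, 3416) turning into (740, 427) for a (300, 300) request. A quick check (the path is a placeholder for my test image):

    # Quick check of how draft picks its output size;
    # 'big_photo.jpg' is a placeholder for my (5920, 3416) test image
    import PIL.Image

    im = PIL.Image.open('big_photo.jpg')
    print(im.size)                # (5920, 3416)
    im.draft('RGB', (300, 300))
    print(im.size)                # (740, 427) -- exactly 1/8 of the original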