I am working on a project that requires inferencing on a Raspberry Pi. I noticed that I get a HUGE performance improvement in image loading when using this:
```python
x = Image.open(fn)
x.draft('RGB', (224, 224))
x = pil2tensor(x, np.float32)
```
instead of the regular:
```python
x = Image.open(fn).convert('RGB')
x = pil2tensor(x, np.float32)
```
Image load times drop from 1.6 seconds per image to just 0.13 seconds per image when using the `draft` function to decode directly at the required image size!
So I've now created my own `open_image` function, but I can imagine this might also provide a general performance improvement in fast.ai for training.
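For reference, here is a minimal sketch of what such a helper could look like. Note that `open_image_draft` is a hypothetical name, and I've used a plain NumPy conversion in place of fast.ai's `pil2tensor` so the snippet is self-contained. `draft()` only tells the JPEG decoder to decode at a *roughly* matching resolution (it works in power-of-two steps and is a no-op for non-JPEG files), so an exact `resize` is still needed afterwards:

```python
import numpy as np
from PIL import Image

def open_image_draft(fn, size=(224, 224)):
    """Load an image quickly using JPEG draft mode.

    draft() hints the JPEG decoder to decode at a reduced
    resolution close to `size`, which is far cheaper than
    decoding at full resolution and downscaling afterwards.
    Since draft() only lands near the target size (and does
    nothing for non-JPEG formats), we still resize exactly.
    """
    im = Image.open(fn)
    im.draft('RGB', size)              # decoder hint; may be a no-op
    im = im.convert('RGB').resize(size)
    # stand-in for fastai's pil2tensor: float32 array in [0, 1]
    return np.asarray(im, dtype=np.float32) / 255.0
```

The resulting array has shape `(height, width, 3)`; fast.ai's `pil2tensor` would additionally move channels first, which I've left out here for simplicity.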
If this sounds like a good idea, I'm of course happy to help with creating a Pull Request. What do you think?