Predicting on single image using Fastai Model

Hi! Quick q. I went back to Lesson 1 and wanted to get the dog/cat predictions on a new image using the model we built (learn, from the 12th notebook code block). Let’s say I want to predict the score for this one I found on the internet.

!wget dog.png 
img = plt.imread('dog.png')


I wasn’t totally sure how to predict on a new image, so I played around with the code and tried a few things. But then I got this strange error:
RuntimeError: running_mean should contain 256 elements not 1024

Thoughts on how to debug this? Am I using the wrong command to predict? Let me know if this isn’t reproducible.


You have to pass the image through the transformations before you feed it into the classifier. This was discussed at length here: How do we use our model against a specific image?


Just to clarify, since the linked thread is a little unclear about the syntax for predicting on a single image:

trn_tfms, val_tfms = tfms_from_model(arch, sz)  # get transformations
im = val_tfms(open_image('image.png'))  # load the image and apply validation transforms
learn.precompute = False  # we'll pass in a raw image, not precomputed activations
preds = learn.predict_array(im[None])  # im[None] adds a batch dimension
np.argmax(preds)  # preds are log probabilities of classes

(If you simply use Jeremy’s code linked in Phani’s reply, you’ll probably get a running_mean error – that’s because we’re passing in an image rather than a precomputed activation, as Jeremy later explains here.)
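To make that shape mismatch concrete, here’s a toy numpy-only sketch (not fastai code) of the inference-time batch-norm check behind this error: running_mean holds one statistic per input channel, so feeding in something with a different channel count than the layer was trained on (e.g. a raw image where precomputed activations were expected) trips the check.

```python
import numpy as np

def batchnorm_infer(x, running_mean, running_var, eps=1e-5):
    """Toy inference-time batch norm.

    x is (batch, channels, h, w); running_mean/running_var are (channels,).
    """
    if running_mean.shape[0] != x.shape[1]:
        raise RuntimeError(
            f"running_mean should contain {x.shape[1]} elements "
            f"not {running_mean.shape[0]}")
    mean = running_mean[None, :, None, None]
    var = running_var[None, :, None, None]
    return (x - mean) / np.sqrt(var + eps)

stats = (np.zeros(256), np.ones(256))  # stats for a 256-channel layer

# Channel counts agree: works fine.
out = batchnorm_infer(np.random.rand(2, 256, 7, 7), *stats)
print(out.shape)  # (2, 256, 7, 7)

# Channel counts disagree: same style of error as in the thread.
try:
    batchnorm_infer(np.random.rand(2, 1024, 7, 7), *stats)
except RuntimeError as e:
    print(e)
```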



You’re right @thomzi12 – I hadn’t linked to the working code. I ran into the same problem, went through the complete thread, fiddled with the code, and figured it out in the end. Maybe I wanted to teach you how to fish rather than merely give away the answer :sweat_smile: Apologies!


Hello @thomzi12,

I used your code and got an error. Do you know what I typed wrong?

My code :

trn_tfms, val_tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1) # get transformations

fn = PATH+data.val_ds.fnames[775]
im =
im_val = val_tfms(im)

Notes :

  • When I type im, the image displays correctly in my Jupyter notebook.
  • When I use tfms_from_model(arch, sz) as you do, I get the same error.
  • The problem comes from val_tfms(), and I get the same error if I use trn_tfms() – but why?

Apologies - there’s been a slight change. Instead of what you’re using, you should use open_image (you’ll need to git pull to get this function, since it’s new).


Thanks @jeremy ! It works well now.


Hi Jeremy, I have tried both of the below in the dog ID code from Lesson 2, in order to view one image, but neither works. Please advise :slight_smile:

fn = PATH + data.trn_ds.fnames[0]
img =; img
img = open_image(fn); img

Hi @jeremy and @thomzi12. I am trying to create a prototype application in my local environment just to play around with the model; I would like to try using it to detect images from my webcam. Since my local environment doesn’t have a GPU and I don’t have CUDA enabled, I set up torch without CUDA.

learn.predict() works, and it was able to predict images in the valid/test folder even without a GPU. But when I tried the method suggested here, I got an error that said:

“AssertionError: Torch not compiled with CUDA enabled”

I am wondering if there is a function I can call to predict single images without a GPU? I am currently exploring the fastai library, but I am not that good at Python, so I can’t figure out why learn.predict() works while learn.predict_array() doesn’t.

I am not sure if this is still within the scope of the lesson, but I believe it would help me better understand how the fastai library works. Thank you!


Hi Thomas,
Can you please elaborate on why learn.predict_array(im[None]) and not learn.predict_array(im)?


We use im[None] because everything passed to or returned from the model is assumed to be a mini-batch of tensors, so the input should be a 4-dimensional tensor: (batch, channels, height, width).

If you run something like the following code:

trn_tfms, val_tfms = tfms_from_model(resnet34,sz)
im = val_tfms(open_image(f'{PATH}valid/cats/cat.10016.jpg'))

You can see that im[None] has an extra dimension:

im[None].shape: (1, 3, 224, 224)
im.shape: (3, 224, 224)


So None is what numpy uses by convention to specify that you want to create a new axis. You can alternatively use np.newaxis (which is currently just an alias for None) for readability.
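Here’s a quick plain-numpy demo of that indexing trick (no fastai needed):

```python
import numpy as np

im = np.zeros((3, 224, 224), dtype=np.float32)  # (channels, height, width)

batch = im[None]  # equivalent to im[np.newaxis]: prepend a batch axis

print(im.shape)            # (3, 224, 224)
print(batch.shape)         # (1, 3, 224, 224)
print(np.newaxis is None)  # True: np.newaxis is literally an alias for None
```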

Thank you @jeongyoonlee.
Could you tell me the reason behind it? I get that nn.functional is being used, but how does that help the adaptive layer be readable by ONNX, given that adaptive layers are absent in ONNX?
Instead of AdaptiveAvgPool2d and AdaptiveMaxPool2d, I used other layers like AvgPool2d and MaxPool2d, but that gave an error of “running_mean should contain 50176 elements not 1024”. Do you know anything about this error?
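For what it’s worth, that kind of mismatch is easy to reproduce in miniature with plain numpy (a sketch, not the actual model code): adaptive pooling always emits a fixed output size, while a fixed-kernel pool’s output grows with the input, so the flattened feature count can stop matching what the next layer’s running_mean expects.

```python
import numpy as np

def adaptive_avg_pool_1(x):
    # Like nn.AdaptiveAvgPool2d(1): output is always (n, c, 1, 1),
    # no matter the input's spatial size.
    return x.mean(axis=(2, 3), keepdims=True)

def avg_pool(x, k):
    # Like nn.AvgPool2d(k) with stride k: output spatial size is
    # (h // k, w // k), so it depends on the input size.
    n, c, h, w = x.shape
    return x.reshape(n, c, h // k, k, w // k, k).mean(axis=(3, 5))

small = np.random.rand(1, 512, 7, 7)    # e.g. feature map from a 224x224 input
large = np.random.rand(1, 512, 14, 14)  # e.g. feature map from a 448x448 input

# Adaptive pooling: 512 flattened features either way.
print(adaptive_avg_pool_1(small).size, adaptive_avg_pool_1(large).size)  # 512 512

# Fixed 7x7 pooling: 512 features for one input size, 2048 for the other,
# so a downstream layer's running_mean no longer matches.
print(avg_pool(small, 7).size, avg_pool(large, 7).size)  # 512 2048
```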

I have the same problem when I try to use pascal-multi object detection (Lesson 9). Did you solve that problem?

You have to unfreeze the learner before calling image prediction: do a learn.unfreeze() and try calling learn.predict_array(image[None]).

As I understand it, we do unfreeze() when we want to retrain the model.

Your problem may be solved here; also check the PyTorch documentation.

Hello everyone,
I get a “list index out of range” error when I use learn.predict(image).
Would you mind helping me with this error?
The screenshot of the error is below:

Also, the image of my dataset tree is below :

Actually, it’s just a small part of my dataset, to show its tree.
Thanks a lot.