How to predict with a model trained using learn.fit_one_cycle

I trained a model on the MNIST_SAMPLE dataset (3s and 7s).

path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path)
learn=cnn_learner(dls,resnet18,pretrained=False,loss_func=F.cross_entropy,metrics = accuracy)
learn.fit_one_cycle(1,0.1)

Now I wish to test this model on a single image.
I load the images:

threes = (path/'train'/'3').ls()
threes_data = [Image.open(i) for i in threes]
tensor_3 =[tensor(i).float()/255 for i in threes_data]
img = tensor_3[0]

Prediction:
learn.predict(img)

Getting the following error message:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/fastai/learner.py in with_events(self, f, event_type, ex, final)
154 def with_events(self, f, event_type, ex, final=noop):
--> 155     try:       self(f'before_{event_type}')       ;f()
    156     except ex: self(f'after_cancel_{event_type}')

30 frames
RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[1, 1, 28, 28] to have 3 channels, but got 1 channels instead

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
TypeError: expected Tensor as element 0 in argument 0, but got NoneType

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/fastcore/utils.py in range_of(x)
    209 def range_of(x):
    210     "All indices of collection `x` (i.e. `list(range(len(x)))`)"
--> 211     return list(range(len(x)))
    212 
    213 # Cell

TypeError: object of type 'NoneType' has no len()

It looks like the model expects three input channels, but you are passing a single-channel tensor at inference time.
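A minimal sketch of what that mismatch looks like, assuming `img` is the [28, 28] float tensor built above (this bypasses fastai's item/batch transforms, so treat the output as illustrative only): the ResNet stem, whose weight is [64, 3, 7, 7], wants a 3-channel batch, so the single grayscale channel has to be repeated.

# Sketch only: feed the model directly, bypassing fastai's transform pipeline
x = img.unsqueeze(0).unsqueeze(0)   # [28, 28] -> [1, 1, 28, 28]
x = x.repeat(1, 3, 1, 1)            # repeat the grayscale channel -> [1, 3, 28, 28]
with torch.no_grad():
    logits = learn.model.eval()(x.to(dls.device))
print(logits.argmax(dim=1))         # predicted class index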


Thank you!
Somehow this slipped by me.
I tried to solve this by reshaping the img tensor.

img = img.view(1,1,28,28)

type(img),img.shape

(torch.Tensor, torch.Size([1, 1, 28, 28]))

But upon predicting I get

learn.predict(img)

KeyError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/PIL/Image.py in fromarray(obj, mode)
   2679     try:
-> 2680         mode, rawmode = _fromarray_typemap[typekey]
   2681     except KeyError:

KeyError: ((1, 1, 28, 28), '<f4')

During handling of the above exception, another exception occurred:

TypeError Traceback (most recent call last)
25 frames
/usr/local/lib/python3.6/dist-packages/PIL/Image.py in fromarray(obj, mode)
   2680         mode, rawmode = _fromarray_typemap[typekey]
   2681     except KeyError:
-> 2682         raise TypeError("Cannot handle this data type: %s, %s" % typekey)
   2683     else:
   2684         rawmode = mode

TypeError: Cannot handle this data type: (1, 1, 28, 28), <f4

Just a rookie question here (correct me if I am wrong):
When predicting, shouldn't you pass an actual image instead of the image converted to a tensor?


Can you predict by looping through the images in a folder? If so, like @jimmiemunyi says, can you try learn.predict(path/'test/img.jpg'), where 'img.jpg' is the name of the image you are trying to predict?
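A minimal sketch of that suggestion, borrowing the first file from the valid/3 folder that ships with MNIST_SAMPLE (the 'test/img.jpg' path above is just a placeholder):

# Let fastai open and decode the file itself instead of handing it a raw tensor
test_files = (path/'valid'/'3').ls()
pred_class, pred_idx, probs = learn.predict(test_files[0])
print(pred_class, probs[pred_idx])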

Two points:

  1. Are you using a ResNet model? If not, can you tell us which model you are using?
  2. On the error:
    KeyError                                  Traceback (most recent call last)
    /usr/local/lib/python3.6/dist-packages/PIL/Image.py in fromarray(obj, mode)
       2679     try:
    -> 2680         mode, rawmode = _fromarray_typemap[typekey]
       2681     except KeyError:

    KeyError: ((1, 1, 28, 28), '<f4')

I think the issue is with Pillow not being able to read the image. See this question: https://stackoverflow.com/questions/60138697/typeerror-cannot-handle-this-data-type-1-1-3-f4
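A minimal sketch of that Pillow limitation, assuming `img` is the reshaped [1, 1, 28, 28] float tensor from above: Image.fromarray only knows how to map 2-D (H, W) or 3-D (H, W, C) arrays of supported dtypes to an image mode, so a 4-D float32 batch has no entry in its typemap.

from PIL import Image

# Image.fromarray(img.numpy()) fails: (1, 1, 28, 28) float32 is not in the typemap
arr = (img.squeeze() * 255).byte().numpy()   # back to a (28, 28) uint8 array
pil_img = Image.fromarray(arr)               # this shape/dtype Pillow can handle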

Hi @jimmiemunyi, the model was trained on tensors of the images, so I should probably pass it a tensor of the same shape.


Hey,

I tried to pass an image file, but it didn't work. See the attached screenshots showing first the image I used (to show that it exists) and then the error.

  1. I'm using a ResNet model:
    learn=cnn_learner(dls,resnet18,pretrained=False,loss_func=F.cross_entropy,metrics = accuracy)

  2. The error you mentioned seems to be related to NumPy, while I'm using a torch tensor. I tried multiplying by 255, but that didn't help.

So frustrating; you'd expect predicting to be the easiest part...

@jimmiemunyi - this also applies to your suggestion.

Thanks for the clarification. I hope you find a solution.


So, the input type needs to be in one of the formats shown in the screenshot. The error must have a simple fix, so don't be frustrated :slight_smile:

Can you tell me the source of the dataset? I will train with FastAI's ResNet model and come back if someone has not resolved your query already!


@karthikr I appreciate your support. It encourages me to keep looking for a solution.

Here's the complete code from the lesson (run on Google Colab):

!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
from fastai.vision.all import *
from fastbook import *
matplotlib.rc('image', cmap='Greys')

path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path)
learn=cnn_learner(dls,resnet18,pretrained=False,loss_func=F.cross_entropy,metrics = accuracy)
learn.fit_one_cycle(1,0.1)

The only progress I have made so far is being able to get predictions for the training data using the following:

(tr,vl)= dls.one_batch()
learn.get_preds(dl=[(tr,vl)])

Of course, the whole point of the code is to allow predictions on data outside the training and test sets.
This I still can't do.

Let me know if you manage to figure this out.

I tried debugging your code, and it looks like the loss_func you pass could be the issue.

Please change this line:
learn=cnn_learner(dls,resnet18,pretrained=False,loss_func=F.cross_entropy,metrics = accuracy)

to one of the below:

  1. When giving the loss_func: learn = cnn_learner(dls, resnet18, pretrained=False, metrics=accuracy, loss_func=CrossEntropyLossFlat())

  2. When not giving the loss_func: learn = cnn_learner(dls, resnet18, pretrained=False, metrics=accuracy)
    # You do not have to specify a loss_func, as fastai picks one by default. You can print it to confirm:
    print(learn.loss_func)

  3. Inference: please pass an image that is not in your training set; the examples below are for illustration only.

If an individual item:
# Input is a pathlib.PosixPath
threes = (path/'train'/'3').ls()
learn.predict(threes[0])

# Input is an image file in a folder:
learn.predict('/images/three.png')


If a batch (I got this from another thread in the forum)
dl = learn.dls.test_dl(threes)
inp, preds,_,dec_preds = learn.get_preds(dl=dl, with_input=True, with_decoded=True)
full_list = learn.dls.decode_batch((*tuplify(inp),*tuplify(dec_preds)), max_n=18) # decode the first 18 inputs

print(len(full_list)) # This gives 18, matching the max_n we asked decode_batch for

# Get the preds from the list of tuples:
numbers = [num[1] for num in full_list]

# This is the 'number' predicted by the model. Since we fed all threes, we should get those
print(numbers)


It Worked :fireworks: :fireworks: :fireworks:
You rock big time!


If I understand correctly, the change was that you replaced loss_func=F.cross_entropy with loss_func=CrossEntropyLossFlat().

Can you please explain how this made a difference and, more importantly, how you figured out that this was the problem and found the solution?

You are amazing. Thank you so much!

P.S. It's very strange that the problem was with the learner code, as it's copy-pasted from Jeremy's lesson...


Glad it worked.

Since your predict call was similar to what I use, I worked backwards to see what could be different. I looked at the loss functions on GitHub and noticed they were all 'flat'. I felt that changing your loss to one of the 'flat' versions would work, and it did (note that you can still write custom loss functions)...

While I was debugging the code to understand why, I noticed another thread:
Image Classifier learn.predict(img). It looks like FastAI v2 needs the 'flat' version of the loss, which your loss function would not provide.
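A minimal sketch of the difference being described here, assuming fastai v2's CrossEntropyLossFlat from fastai.losses: the fastai loss classes carry activation and decodes hooks that Learner.predict and get_preds use to turn raw logits into a decoded label, while a bare F.cross_entropy function has neither.

import torch.nn.functional as F
from fastai.losses import CrossEntropyLossFlat

loss = CrossEntropyLossFlat()
print(hasattr(loss, 'activation'), hasattr(loss, 'decodes'))  # True True
print(hasattr(F.cross_entropy, 'activation'))                 # False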


Not in a million years would I have figured this out.
I learned so much from this interaction.

Many (Many) thanks!
