AttributeError: 'TensorPoint' object has no attribute 'img_size'

Hi all,

I followed the course and I am trying to run some of the examples in the notebooks for my own projects.

More specifically, I am working on the 06_multicat regression example, where Jeremy gives images and coordinates as inputs for a regression problem.

My project does exactly the same thing, except that I run it in a Google Colab notebook (as I can mount a Google Drive and access my data very easily).

I can thoroughly go through each line of the code and replicate it without any issues, I can create the DataBlock, I can see a batch, I can observe the shapes of the mini-batches, and I can even train the model.

Everything is fine until I want to see the results. At that point I run the last cell:

learn.show_results(ds_idx=1, nrows=3, figsize=(6,8))

and I get the following error:

AttributeError: 'TensorPoint' object has no attribute 'img_size'

I am quite surprised, as there is barely any information about this error when I google it.

As a matter of fact, there’s only one difference I observed between my results and the ones in the course’s notebook:

When Jeremy runs the cell to see the first row of the dependent variable, yb[0] (where xb, yb = dls.one_batch()), he obtains:

tensor([[0.0856, 0.4734]])

When I run the same command for my data, I get:

TensorPoint([[-0.1336, -0.5072]])

Why TensorPoint instead of tensor? Especially when the command xb.shape, yb.shape outputs:
(torch.Size([64, 3, 256, 256]), torch.Size([64, 1, 2])), exactly as in the course's notebook.
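As background for the negative values: fastai's point transforms rescale coordinates to the range [-1, 1] relative to the image size, which is why a TensorPoint can hold values like -0.5072 even though pixel coordinates are always positive. A minimal sketch of that mapping in plain Python (the helper names are mine, not fastai's):

```python
def scale_point(px, py, w, h):
    """Map pixel coordinates to [-1, 1] relative to image size
    (the convention fastai's point scaling uses by default)."""
    return px * 2 / w - 1, py * 2 / h - 1

def unscale_point(sx, sy, w, h):
    """Inverse mapping: [-1, 1] coordinates back to pixels."""
    return (sx + 1) * w / 2, (sy + 1) * h / 2

# The centre of a 256x256 image maps to (0.0, 0.0):
print(scale_point(128, 128, 256, 256))      # (0.0, 0.0)
# And (-1, -1) maps back to the top-left corner:
print(unscale_point(-1.0, -1.0, 256, 256))  # (0.0, 0.0)
```

This scaling is also why the library needs to know the image size (the img_size attribute the error complains about) to draw the point back onto the image.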

Any help would be much appreciated.

I am facing the same error. Were you able to solve it?

Got the same error too … anyone able to crack it?

Also, would anyone know how to get the coordinates for the prediction instead of getting the image with the red dot?

You can use
res = learn.predict(test_img_path)
which will give you the predicted point for one image.
Otherwise, if you want the predictions for every image of your validation set, you can use
preds, y = learn.get_preds()
where preds are the predictions and y the ground truth from the dataset.
Hope that helps :slight_smile:
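Once you have preds and y, you can, for example, compute the mean Euclidean distance between predicted and true points. A sketch in plain Python, assuming each point is an (x, y) pair as in the [N, 1, 2] batches above (the helper name is mine):

```python
import math

def mean_point_error(preds, targets):
    """Mean Euclidean distance between predicted and true points."""
    dists = [math.dist(p, t) for p, t in zip(preds, targets)]
    return sum(dists) / len(dists)

preds   = [(0.1, 0.5), (-0.2, 0.0)]
targets = [(0.1, 0.4), (-0.2, 0.0)]
print(mean_point_error(preds, targets))  # ≈ 0.05
```

Note that if the points are still in fastai's scaled [-1, 1] space, this error is relative to the image size rather than in pixels.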

Cheers man; also, would you happen to know how to fix the issue above?

I unfortunately don't know how to properly fix that; I just used a pretty ugly workaround:
In lines 244–245 of fastai/fastai/vision/ I set sz = None, like this:

244   def _get_sz(self, x):
245       sz = None  # x.img_size

That is definitely not a nice solution, but at least everything works without errors.


Cheers man, appreciate it.

Many thanks for the report and to @kilianft for the workaround. There’s a fix now available in the latest fastai on pypi and conda.


Wow! Many thanks to all of you who contributed to solving my question.

With an upgrade of fastai it works like a charm in Google Colab notebooks too.

Thanks Jeremy for all the effort and dedication you put in the videos and notebooks. It is a pleasure to follow the courses, and I guess I speak for many when I say that after following the course one feels empowered to manipulate the code and try new things for slightly different projects.


Great! Just hit that problem yesterday; I'm so happy that @jeremy solved it just one day before! Feeling lucky!

Thanks Sir, I'm guessing this would work on Paperspace too, yes?

Using:
!pip install -Uqq fastbook
import fastbook
from fastbook import *
from fastai.collab import *
from fastai.tabular.all import *

Hi, how did you upgrade your fastai? I tried, but my issue isn't being resolved. Could you share the code you used to upgrade and initialize your notebook?

Run !pip install fastai --upgrade
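A common Colab gotcha: if fastai was already imported, the upgrade only takes effect after you restart the runtime (Runtime → Restart runtime). To check which version is actually loaded afterwards, you can print fastai.__version__ and compare version strings; a tiny sketch (the helper is mine, not part of fastai):

```python
def version_tuple(v):
    """Turn a version string like '2.1.5' into (2, 1, 5) for comparison."""
    return tuple(int(part) for part in v.split("."))

# After restarting the runtime, check what actually got imported, e.g.:
#   import fastai; print(fastai.__version__)
print(version_tuple("2.1.5") > version_tuple("2.0.19"))  # True
```

String comparison alone would get this wrong ("2.0.19" > "2.1.5" lexicographically in some positions), which is why the tuple conversion helps.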


Hi, thank you!! I tried it, however it did not work; I wonder where I'm going wrong … will probably try the other solution he suggested too.

Hey, how did you access fastai/fastai/vision/? I'm using Jupyter on Paperspace Gradient and can't find a directory called fastai.

First you need to run:

!pip install fastai --upgrade

After this:

import fastbook
from fastbook import *

And only then run your code. Maybe this helps you.


Hi, will try this, thank you very much!!!