FastAI throwing a runtime error when using custom train & test sets

Thank you so much, your solution worked. Very grateful!

Legend! That worked so well!

Hi folks, I’m wondering if anyone can help me. I have been using the RetinaNet model on my own dataset. For some time the model worked great, then all of a sudden my predictions and mAP score were terrible. I ran the original notebook again on the same data with nothing changed: where I originally got an mAP of 39%, this time I got 8%. After running the notebook several times on the same data, the results go from bad to worse; it’s inconsistent. I ran the notebook again the other day and got a score of 40%, then reran the exact same notebook the next day and got a score of 2%. If anybody has any ideas on what’s going on, I would be very grateful for any help. Thank you.

Thanks for the tip @oo92! I was struggling to even do that, but I realized I had to install torch 1.4 before installing fastai; I guess PyTorch 1.5 is installed by default in Google Colab.
This is how the first code cell in my notebook looks to make it work:

!pip install "torch==1.4" "torchvision==0.5.0"
!curl -s https://course.fast.ai/setup/colab | bash

Hope that helps someone :slight_smile:

2 Likes

Thanks, it worked for me too (I’m still on fastai v1, trying to follow lesson 1 from 2019 on YouTube, but with the latest PyTorch). Thanks btblueskies too.

Hi @muellerzr
PR400 seems to have gotten a bit stuck in the review process. What is the best way to kick it off again?

Just wait patiently. Jeremy is the only one really approving PRs from what I can see, so it’ll get approved when it does. I have no hand in any of that. (I have my own PR going.)

1 Like

This problem still exists; I ran into it with the GAN code just now. Hope this can be fixed soon.

I am new and just started my course at fast.ai.
I am getting the same error, but I am not using Colab. I am running into the same error even after downgrading my PyTorch. Please help.

Hi! I just started the course, too, and was getting the warning. Seeing how the pull request is waiting for review, I’m using this workaround to suppress the warning message until the fix. That way when I show people I still look like I know what I’m doing :wink:.

import warnings
warnings.filterwarnings('ignore')

source: https://stackoverflow.com/questions/9031783/hide-all-warnings-in-ipython

Hope that helps some people!
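If you prefer not to hide every warning, a narrower variant of the workaround above is to filter only this one message. This is a sketch, assuming the PyTorch 1.5 warning text mentions `recompute_scale_factor` (the message regex would need adjusting if your version words it differently):

```python
# Silence only the interpolate/upsample UserWarning rather than all
# warnings. The 'message' argument is a regex matched against the start
# of the warning text.
import warnings

warnings.filterwarnings(
    'ignore',
    message='.*recompute_scale_factor.*',
    category=UserWarning,
)
```

Everything else (deprecation notices, fastai’s own warnings) still gets through, which makes debugging later on a bit safer.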

The issue is with setting size=128 in the transform with the latest version of PyTorch, so you can just try removing size=128 and letting it take the default. This worked for me. Thanks.

data = (SegmentationItemList.from_folder(path_img)
        .split_by_rand_pct()
        .label_from_func(get_y_fn, classes=codes)
        .transform(get_transforms(), tfm_y=True, size=128)
        .databunch())

If you have a newer version of PyTorch, then you have two options:

  1. Download the compatible version:

pip install "torch==1.4" "torchvision==0.5.0"

  2. Alternatively, set recompute_scale_factor=True on line 540 of fastai/vision/image.py:

Replace F.interpolate(x[None], scale_factor=1/d, mode='area') with F.interpolate(x[None], scale_factor=1/d, mode='area', recompute_scale_factor=True)
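On Colab, editing fastai/vision/image.py is awkward because the edit is lost whenever the package is reinstalled. A runtime alternative is to wrap the function instead. Below is a stdlib-only sketch of a generic keyword-injecting wrapper; the application to torch’s F.interpolate (in the trailing comment) is an assumption on my part and is untested here, requiring PyTorch >= 1.5 where the flag exists:

```python
# Generic helper: wrap a function so calls get a default keyword argument,
# optionally only when a predicate on the call's kwargs holds.
import functools

def with_default_kwarg(fn, name, value, only_if=None):
    """Return fn wrapped so kwargs[name] defaults to value.

    only_if: optional predicate on the call's kwargs; when given, the
    default is injected only for calls where it returns True.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if only_if is None or only_if(kwargs):
            kwargs.setdefault(name, value)
        return fn(*args, **kwargs)
    return wrapper

# Hypothetical application (requires torch; not run here):
# import torch.nn.functional as F
# F.interpolate = with_default_kwarg(
#     F.interpolate, 'recompute_scale_factor', True,
#     only_if=lambda kw: kw.get('scale_factor') is not None)
```

Run the patch once at the top of the notebook and it survives package reinstalls within the session, since nothing on disk is modified.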

Brother, is there an easy way to do this? Because it’s Colab.

For now, the easiest thing to do is to just ignore the warning, or use the solution from @gregadams above (which is just adding two easy commands to your notebook). The PR that fixes this warning has been submitted, and once it is merged, this warning will go away in Colab. Colab still works as it is now, it just throws a warning, so it should not stop you from doing any model training or inference. If your training stops, something else is wrong.

I agree

yup thanks

Just curious: can we change the code so that it runs on a newer version of torch? I think it would be better to update the code rather than pin an outdated PyTorch.

1 Like

This is what worked for me:

conda install pytorch=1.4.0 torchvision

So, the same solution as mentioned above, just done via conda.
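Whichever route you take (pip or conda), it can be worth confirming which versions actually ended up installed before rerunning the notebook. A small sketch using the stdlib `importlib.metadata` (Python 3.8+), which avoids importing torch itself:

```python
# Report the installed torch and torchvision versions, or note their
# absence, without importing the packages.
from importlib.metadata import PackageNotFoundError, version

def installed_version(pkg):
    """Return the installed version string for pkg, or None if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

for pkg in ('torch', 'torchvision'):
    print(pkg, installed_version(pkg) or 'not installed')
```

For the downgrade discussed in this thread, you would expect this to print torch 1.4.x and torchvision 0.5.x.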