FastAI throwing a runtime error when using custom train & test sets

Hi.

I’m working on the Food-101 dataset and, as you may know, it comes with both train and test splits. Because the dataset could no longer be found at the ETH Zurich link, I had to divide it into partitions of less than 1 GB each, clone them into Colab, and reassemble them there. It’s very tedious work, but I got it working. I’ll omit the Python code, but the file structure looks like this:

Food-101
      images
            train
               ...75750 train images
            test
               ...25250 test images
      meta
            classes.txt
            labels.txt
            test.json
            test.txt
            train.json
            train.txt
      README.txt
      license_agreement.txt

The following code is what’s throwing the runtime error:

train_image_path = Path('images/train/')
test_image_path = Path('images/test/')
path = Path('../Food-101')

food_names = get_image_files(train_image_path)

file_parse = r'/([^/]+)_\d+\.(png|jpg|jpeg)$'

data = ImageDataBunch.from_folder(train_image_path, test_image_path, valid_pct=0.2, ds_tfms=get_transforms(), size=224)
data.normalize(imagenet_stats)

My guess is that ImageDataBunch.from_folder() is what’s throwing the error, but I don’t know why it’s getting caught up on the data types, since (I don’t think) I’m supplying it with any data that has a specific type.
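For what it’s worth, if I’m reading the fastai v1 signature right, the second positional argument of from_folder is train, so passing test_image_path there may not do what you intend. Here is a minimal sketch of how I believe the call is meant to look, pointing at a single root folder and naming the subfolders (the test= keyword and the exact layout are my assumptions, so double-check against the docs):

from fastai.vision import *   # fastai v1

path = Path('../Food-101/images')

# from_folder expects the parent directory; train/test name the subfolders,
# and valid_pct holds out 20% of the training images for validation.
data = ImageDataBunch.from_folder(path, train='train', test='test',
                                  valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224)
data.normalize(imagenet_stats)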

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2854: UserWarning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
  warnings.warn("The default behavior for interpolate/upsample with float scale_factor will change "
You can deactivate this warning by passing `no_check=True`.
/usr/local/lib/python3.6/dist-packages/fastai/basic_data.py:262: UserWarning: There seems to be something wrong with your dataset, for example, in the first batch can't access these elements in self.train_ds: 9600,37233,16116,38249,1826...
  warn(warn_msg)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/IPython/core/formatters.py in __call__(self, obj)
    697                 type_pprinters=self.type_printers,
    698                 deferred_pprinters=self.deferred_printers)
--> 699             printer.pretty(obj)
    700             printer.flush()
    701             return stream.getvalue()

11 frames
/usr/local/lib/python3.6/dist-packages/fastai/vision/image.py in affine(self, func, *args, **kwargs)
    181         "Equivalent to `image.affine_mat = image.affine_mat @ func()`."
    182         m = tensor(func(*args, **kwargs)).to(self.device)
--> 183         self.affine_mat = self.affine_mat @ m
    184         return self
    185 

RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #3 'mat2' in call to _th_addmm_out

I’m running the lesson1 notebook in Colab and got the same error. It looks like the version of PyTorch running on Colab may not be compatible with fastai? Appreciate the help!

https://colab.research.google.com/github/fastai/course-v3/blob/master/nbs/dl1/lesson1-pets.ipynb#scrollTo=SEMlDH19duLT&line=2&uniqifier=1

Running the cell with:
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs).normalize(imagenet_stats)

Error:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2854: UserWarning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
warnings.warn("The default behavior for interpolate/upsample with float scale_factor will change "
/usr/local/lib/python3.6/dist-packages/fastai/basic_data.py:262: UserWarning: There seems to be something wrong with your dataset, for example, in the first batch can’t access these elements in self.train_ds: 5478,1218,2792,5389,3016…
warn(warn_msg)

It seems there are some issues with the version of torch used in Colab.

Try installing a specific version of torch in your Colab notebook before running the fastai Python code:

!pip install "torch==1.4" "torchvision==0.5.0"
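After the install finishes, and before importing fastai, you can sanity-check which versions Colab will actually use:

import torch, torchvision

# Should print 1.4.x and 0.5.x after the downgrade; if the newer versions
# still show up, restart the Colab runtime so the reinstall takes effect.
print(torch.__version__, torchvision.__version__)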

Thank you! That did the trick!

So, in this case, it will be an older version of PyTorch, won’t it?

That seems to be the case.

The problem seems to be with a single function from PyTorch, nn.Upsample, used by the RatioResize transform in fastai2’s batch image augmenter, which changed its default behaviour between PyTorch 1.4.0 and 1.5.0. Luckily, you can get PyTorch 1.5.0 to behave like 1.4.0 simply by passing an additional parameter, recompute_scale_factor=True, when you call it. In practical terms this means updating the fastai2 file augment.py (found in fastai2/vision) to add this option. On my system I did that by uninstalling the pip version of fastai2 (pip3 uninstall fastai2), checking out an editable copy with git clone https://github.com/fastai/fastai2, editing line 289 of ~/fastai2/fastai2/vision/augment.py from x = F.interpolate(x, scale_factor=1/d, mode='area') to x = F.interpolate(x, scale_factor=1/d, mode='area', recompute_scale_factor=True), and then installing the patched fastai2 with pip install -e ".[dev]".
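If you’d rather not keep a patched checkout around, another option, just a sketch I haven’t battle-tested and which assumes the affected calls pass scale_factor as a keyword, is to wrap F.interpolate at the top of the notebook so the old behaviour is restored globally:

import torch.nn.functional as F

_orig_interpolate = F.interpolate

def _patched_interpolate(*args, **kwargs):
    # Restore the pre-1.5 behaviour for any call that passes a scale_factor
    # without saying how the output size should be computed.
    if kwargs.get('scale_factor') is not None and kwargs.get('recompute_scale_factor') is None:
        kwargs['recompute_scale_factor'] = True
    return _orig_interpolate(*args, **kwargs)

F.interpolate = _patched_interpolate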


I was able to apply this same fix to fastai v1 by making the same edit to the file …/fastai/fastai/vision/image.py on line 540.
It worked like a charm… Thanks!
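For anyone hunting for the exact spot, if I remember right the line in v1 reads the same as the fastai2 one, and the edit is the same one-liner (your line number may drift a little between releases):

# fastai/vision/image.py, around line 540 (fastai v1)
# before:
#   x = F.interpolate(x, scale_factor=1/d, mode='area')
# after:
x = F.interpolate(x, scale_factor=1/d, mode='area', recompute_scale_factor=True)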


Hi, Onur.

Thanks for this. I have a question.

Will I have to execute this line installing torch 1.4 whenever I open and run the notebook?

If you’re using Colab, then yes
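If it helps, I keep the pin as the very first cell of the notebook so the downgrade runs before anything imports torch (just my own habit, sketched below):

# First cell of the Colab notebook, before any fastai/torch imports
!pip install "torch==1.4" "torchvision==0.5.0"

# If torch was already imported in this session, restart the runtime
# (Runtime -> Restart runtime) so the downgraded version is picked up.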

Thanks, Onur. Is there any workaround, like using a different Jupyter notebook environment?

I haven’t tried tbh. I’ve only been using Colab.


It worked for me. Thank you
