A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

@muellerzr which libraries should we install in Colab to have the latest version of the library? I found the following in a post by @Srinivas, but I struggle to understand what he/you mean:

I see that there is a difference in the installation commands between the two notebooks.
The older version says:
import os
!pip install -q fastai2 fastcore torch feather-format kornia pyarrow wandb nbdev fastprogress --upgrade
!pip install torchvision==0.4.2
!pip install Pillow==6.2.1 --upgrade
os._exit(00)

It seems to pin the torchvision and Pillow versions (note that the PILLOW_VERSION error is only reported with Pillow 7.0.0), so this could be the cause. I do not know whether pinning the torchvision version is important as well.

The newer install says:
import os
!pip install -q torch torchvision feather-format kornia pyarrow Pillow wandb nbdev fastprogress --upgrade
!pip install -q git+https://github.com/fastai/fastcore --upgrade
!pip install -q git+https://github.com/fastai/fastai2 --upgrade
os._exit(00)

In addition, two more minor questions for our study group: 1) what does the git+https syntax above mean? 2) is there a way to run the fastai2 documentation notebooks easily in Colab? I believe it would help me a lot to understand the library better, rather than just looking at the executed code. How are you guys doing it? Thanks a lot!! :smiling_face_with_three_hearts:

Hi @mgloria, that’s exactly what I think is happening, but I still have this doubt: can we pass anything as the source when creating a DataLoader, as long as we handle it with get_items, get_x and get_y?
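For what it’s worth, here is a minimal sketch of what I mean (my own construction, not from the course notebook; the dict structure is a hypothetical example): the source passed to dataloaders() can be any object, as long as get_items, get_x and get_y know how to unpack it.

from fastai2.vision.all import *

# Build an arbitrary source: a list of dicts instead of the usual path
path = untar_data(URLs.PETS)
files = get_image_files(path/'images')[:64]
source = [{'fname': f, 'label': f.name[0].isupper()} for f in files]

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=lambda src: src,      # receives exactly what we pass to dataloaders()
    get_x=lambda d: d['fname'],     # pull the image path out of each item
    get_y=lambda d: d['label'],     # pull the label out of each item
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(224))

dls = dblock.dataloaders(source, bs=8)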

  1. The git+https syntax is just git+{repo url}: it clones the repo and installs it as a Python library. It works like a plain pip install, except that pip install git+{repo url} installs the most recent copy from GitHub, while a plain pip install installs the last stable release. Depending on when you run it, the GitHub copy can be broken (see the sketch after this list).

  2. You can simply open https://colab.research.google.com/, click the GitHub tab, paste the notebook’s GitHub URL, and press the search button next to it.
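For question 1, here is a quick side-by-side using fastcore as the example (my own illustration, not from the course notebooks):

# Last stable release published on PyPI:
!pip install fastcore
# Latest development copy, pulled straight from the GitHub repo
# (may be newer than, and occasionally more broken than, the PyPI release):
!pip install git+https://github.com/fastai/fastcore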

2 Likes

Hi! I have written a short article on Medium on how to deploy your model to Heroku. It works great. I used Heroku because it’s free (at least if you just want to do a demo or prototype): https://medium.com/analytics-vidhya/put-deep-learning-image-classifier-in-action-a956c4a9bc58

The two scripts do not do the same thing. I saved time by installing pinned versions of torch and torchvision, as otherwise it would install 1.4.1 and then downgrade. Yes, this is the environment you need to stay in if you want to use the library; it will break if you try the most recent releases of everything.

On the documentation notebooks: run the same install script for the library and then just run them.

And then everything @vijayabhaskar said :slight_smile:

Also, as for where to keep an eye out for those install changes: this thread is the one I update.

Thanks a lot to both! For those not following all the posts, the required install is:

import os
!pip install -q feather-format kornia pyarrow wandb nbdev fastprogress fastai2 fastcore --upgrade
!pip install torch==1.3.1
!pip install torchvision==0.4.2
!pip install Pillow==6.2.1 --upgrade
os._exit(00)

just as indicated in the course notebooks :wink:

3 Likes

If I’m creating a manual list of transforms via a Pipeline, how do I add cuda to that? I tried adding lambda x: x.cuda() as one of the transforms, but this gets applied to the PILImage and an error is thrown. I think this has to do with the order of the transforms.

EDIT: I also tried putting TensorImage into the Pipeline via TensorImage.new, with no success.

@lgvaz what is your pipeline defined as? Can’t do much without code :wink: Do you call ToTensor()?

This is my current pipeline:

pipe = Pipeline([PILImage.create, ToTensor(), IntToFloatTensor(), Normalize.from_stats(*imagenet_stats, cuda=False)])

I want to add TensorImage and Cuda to that

Try looking here: https://dev.fast.ai/core.transform.html#Pipeline

It may answer a few questions. Not for CUDA though (though it may cover that too).

1 Like
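On the CUDA part, here is a minimal sketch of one way to do it. My assumption is that Pipeline sorts its transforms by their order attribute, so a bare lambda (default order 0) runs first and therefore sees the PILImage; ToCuda is a hypothetical transform, and order = 100 is only chosen so it runs after Normalize.

from fastai2.vision.all import *

class ToCuda(Transform):
    order = 100                                 # run after Normalize
    def encodes(self, x:TensorImage): return x.cuda()

pipe = Pipeline([PILImage.create, ToTensor(), IntToFloatTensor(),
                 Normalize.from_stats(*imagenet_stats, cuda=False), ToCuda()])

Because encodes is annotated with TensorImage, the cuda call only dispatches on tensors and never touches the PILImage.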

Does anybody know a good reason why the tutorial notebook uses the non-stratified K-fold version?

from sklearn.model_selection import KFold

Would it not be better to use stratified splits to make sure that all classes are represented in both the training and validation sets? E.g. with imbalanced datasets we may run into the problem that one of the minority classes does not appear in the training examples, which would then raise an error.

1 Like

Basically I was only able to get the regular KFold working, @mgloria. If anyone can figure out how to get StratifiedKFold working instead, that would be great :wink:

1 Like
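One possible direction, sketched but not verified against the notebook: sklearn’s StratifiedKFold takes the labels up front so every fold keeps the class proportions, and each fold’s valid_idx could then be handed to fastai2’s IndexSplitter. The items and labels arrays below are hypothetical placeholders.

from sklearn.model_selection import StratifiedKFold
import numpy as np

items  = np.array(['a.jpg', 'b.jpg', 'c.jpg', 'd.jpg', 'e.jpg', 'f.jpg'])
labels = np.array(['cat', 'dog', 'cat', 'dog', 'cat', 'dog'])

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
for train_idx, valid_idx in skf.split(items, labels):
    # valid_idx could be passed to IndexSplitter(valid_idx) as the DataBlock splitter
    print(train_idx, valid_idx)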

I want to catch up with your walk with fastai2, but I also get the “cannot import name 'PILLOW_VERSION' from 'PIL'” error with torch 1.3.1 and Pillow 7.0.0.

Is there another workaround for that problem (I didn’t find one via the forum search)?

You need Pillow 6.2.1, as the directions above state. Basically, Pillow changed how to grab its version between those two releases.

1 Like

See here: https://github.com/python-pillow/Pillow/issues/4130

PILLOW_VERSION has been removed. Use __version__ instead.

https://pillow.readthedocs.io/en/stable/releasenotes/7.0.0.html#pillow-version-constant

So you can modify the torchvision file to use __version__ instead of PILLOW_VERSION, or you can downgrade Pillow.

2 Likes
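A third option, sketched from the release note above (my own suggestion, not something I have tested across torchvision versions): patch the removed constant back onto the PIL module before torchvision is imported, so you neither edit torchvision’s source nor downgrade Pillow.

import PIL

# Pillow 7.0.0 removed PILLOW_VERSION; restore it so torchvision's
# `from PIL import PILLOW_VERSION` keeps working.
if not hasattr(PIL, 'PILLOW_VERSION'):
    PIL.PILLOW_VERSION = PIL.__version__

import torchvision  # must only be imported after the patch above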

Thank you, I was just rereading the posts in detail and I was able to solve it in my conda env with conda install Pillow==6.2.1.

1 Like

Yes, some things do get buried in this thread sadly :frowning: I can’t think of a good way to surface the big things. I know you can get a summary of the thread sorted by likes, but that’s about it.

1 Like

We could add a note to the top? However, maybe that changes soon and is not needed anymore. People here are super helpful, so if this gets asked again we will solve it anyway. :slight_smile:

Thank you for the fast help, @muellerzr & @morgan !

2 Likes

I tried a bit on my own now, but I am also running into errors. Nevertheless, I am sharing since I believe it takes things one important step further: