A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

I tried, but I only see the effect in the direct method call (second row) with no adjustable parameters; when using partial, it is as if magnitude had no effect (first row).


Another question, also related to image transforms: what do scale and ratio in RandomResizedCrop mean? I struggle to make sense of them from the docs.
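Re: scale and ratio: in torchvision's RandomResizedCrop (whose semantics fastai's version follows, as far as I can tell), scale bounds the crop's area as a fraction of the original image's area, and ratio bounds the crop's aspect ratio (width/height). A rough pure-Python sketch of the sampling logic (simplified; names are mine, not the library's):

```python
import math
import random

def sample_crop_size(img_w, img_h, scale=(0.08, 1.0), ratio=(3/4, 4/3)):
    """Sample a crop size the way a RandomResizedCrop-style transform does.

    - `scale` bounds the crop area as a fraction of the original image area.
    - `ratio` bounds the crop's aspect ratio (width / height).
    """
    target_area = random.uniform(*scale) * img_w * img_h
    # Aspect ratio is sampled log-uniformly so wide and tall crops are
    # equally likely.
    aspect = math.exp(random.uniform(math.log(ratio[0]), math.log(ratio[1])))
    w = round(math.sqrt(target_area * aspect))
    h = round(math.sqrt(target_area / aspect))
    return w, h

random.seed(0)
print(sample_crop_size(640, 480))
```

A crop of the sampled size is then cut from a random location and resized to the target size; the real transform also retries a few times when a sampled crop does not fit inside the image.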

Hmmm. Will the warp section of this notebook run? https://github.com/fastai/fastai2/blob/master/nbs/09_vision.augment.ipynb

(Fun fact: these notebooks are how I get familiar with the code, not the documentation yet :slight_smile: ) I’ll look at the rest later when I can (unless someone else can answer your questions.) (@sgugger?)

For the time being you should install the dev version, as ImageDataLoaders is not in the newest pip version yet:

!pip install git+https://github.com/fastai/fastai2
!pip install git+https://github.com/fastai/fastcore

I am working through the 02_MNIST notebook on Colab. When I execute the command

gpu_tfms = [Cuda(), IntToFloatTensor(), Normalize()]
I see the following error: name ‘Cuda’ is not defined

When I check whether a GPU is available via
torch.cuda.is_available() it returns True, as I do have a GPU enabled on Colab.

Is there a different way to load/move data to the GPU on Colab that I am missing?

Cuda() is no longer a transform (if you’re running the most recent version)! It’s applied automatically if a CUDA device is available. Try just passing in Normalize() and IntToFloatTensor(). I’ll look at it and adjust that notebook tomorrow. Nice catch :slight_smile:
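So the batch transforms become just [IntToFloatTensor(), Normalize()]. Conceptually, the int-to-float step only rescales byte pixel values into [0, 1]; a toy pure-Python sketch of that idea (illustrative only, not the actual fastai implementation, which operates on whole tensors):

```python
def int_to_float(pixels, div=255.0):
    """Toy version of the int-to-float step: scale 0-255 integer pixel
    values down to floats in [0, 1]."""
    return [p / div for p in pixels]

print(int_to_float([0, 128, 255]))  # values scaled into [0, 1]
```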

OK, thanks. Makes sense, as I could not find Cuda(), so I should have tried without it myself…


All good :slight_smile: I wrote it down as a later change but forgot to include it with the recent changes.

I’ve updated this change in the notebooks and adjusted the install directions, with a notice to run the install cell only once per session :slight_smile:

A related problem, it seems:
After I define tfms and gpu_tfms without Cuda(), and then execute
dbunch = dsrc.dataloaders(bs=128, after_item=tfms, after_batch=gpu_tfms)

Then do dbunch.show_batch() I get this error:
RuntimeError: expected device cuda:0 but got device cpu

But that is strange, for if I look at the dataloaders source I see:
if device is None: device = default_device()
And if I check in a separate cell in the nb what my default device is by calling default_device(), I get:
device(type='cuda', index=0)

So what gives? Why does dbunch.show_batch() raise a RuntimeError? What am I missing?
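One common way this error arises is that the batch ends up on the GPU while some other tensor in the pipeline (e.g. normalization stats) stays on the CPU. A stdlib-only sketch of that class of failure, with a hypothetical FakeTensor standing in for a real tensor:

```python
class FakeTensor:
    """Hypothetical stand-in for a tensor that records its device."""

    def __init__(self, data, device="cpu"):
        self.data, self.device = data, device

    def add(self, other):
        # Like PyTorch, refuse to mix devices: the same class of failure
        # behind "expected device cuda:0 but got device cpu".
        if self.device != other.device:
            raise RuntimeError(
                f"expected device {self.device} but got device {other.device}")
        return FakeTensor([a + b for a, b in zip(self.data, other.data)],
                          self.device)

batch = FakeTensor([1, 2], device="cuda:0")  # the batch moved to the GPU
stats = FakeTensor([1, 1], device="cpu")     # e.g. stats left on the CPU
try:
    batch.add(stats)
except RuntimeError as err:
    print(err)  # expected device cuda:0 but got device cpu
```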


Can you try running the most recent version? I’ll be able to look into this more later today.

Will do and report back.

OK. Now I am encountering an even more basic problem.
After I run the basic installation scripts when I run
from fastai2.vision.all import *
I get the error cannot import name ‘PILLOW_VERSION’, which points to this line of code:
from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
which lives in this module:
/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py

Stack Overflow says:

Pillow 7.0.0 removed PILLOW_VERSION , you should use __version__ in your own code instead

Any thoughts on this new error? It just popped up in the latest version of the nb I downloaded. I did not see it when I was working with the nb last night.
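Besides pinning Pillow below 7.0, a common workaround is to patch the removed alias back onto the PIL module before torchvision imports it. A sketch of that monkey-patch, shown here against a stub object standing in for the real PIL package so it stays self-contained:

```python
import types

# Stub standing in for the real `PIL` package (Pillow >= 7.0.0 removed the
# PILLOW_VERSION alias but still defines __version__).
PIL = types.SimpleNamespace(__version__="7.0.0")

# The shim: restore the removed alias before anything (e.g. torchvision)
# tries `from PIL import PILLOW_VERSION`.
if not hasattr(PIL, "PILLOW_VERSION"):
    PIL.PILLOW_VERSION = PIL.__version__

print(PIL.PILLOW_VERSION)  # 7.0.0
```

Against the real package that would be `import PIL; PIL.PILLOW_VERSION = PIL.__version__`, run before importing torchvision; pinning versions as in the install cell below is the more robust fix.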

I see that there is a difference in the installation commands between the two nbs.
The older version says:
import os
!pip install -q fastai2 fastcore torch feather-format kornia pyarrow wandb nbdev fastprogress --upgrade
!pip install torchvision==0.4.2
!pip install Pillow==6.2.1 --upgrade
os._exit(00)

This pins the torchvision and Pillow versions (note that the PILLOW_VERSION error only appears with Pillow 7.0.0), so that could be the cause. I do not know whether pinning the torchvision version is important as well.

Newer install says:
import os
!pip install -q torch torchvision feather-format kornia pyarrow Pillow wandb nbdev fastprogress --upgrade
!pip install -q git+https://github.com/fastai/fastcore --upgrade
!pip install -q git+https://github.com/fastai/fastai2 --upgrade
os._exit(00)

Yes, that’s an issue with the latest Pillow. (Also, yes, I updated them again to pin the torch version explicitly, as it installs faster.)

Torchvision is important because there’s a bug in the newest version that breaks fastai2, and we keep torch at 1.3.1 as a result.

Does this help @Srinivas? :slight_smile:

Will try again and post, though it could be a bit later today. Thanks!

Yes, I am past the points where the problems occurred, so good for now. Hopefully it will run to completion. Thanks again.

There is an issue with DataBlock currently, so for the time being I’ve reverted to using an older version of fastai2, pre-DataLoaders etc. The notebooks all show this change and install the correct version.

Major change to DataBlock: the transforms are now included in it. The notebooks will be updated shortly.

Updated.

Hi @muellerzr ,

Really appreciate the notebooks you organized and videos you are creating.

I have been wondering how to go about running fastai2 on video data, or on data with a variable number of 2D slices. Meaning: x is a set of 2D slices composing a 3D volume, and between two distinct x’s the number of 2D slices may vary (i.e. one video may have more frames than another since it’s a longer shot).

Thanks!
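I don’t know of an official fastai2 recipe for this yet, but the usual approach for variable-length inputs is a custom collate step that pads every sample up to the longest one in the batch, plus a mask marking which positions are real frames. A framework-agnostic toy sketch (names are hypothetical; integers stand in for what would really be image tensors):

```python
def pad_batch(samples, pad_value=0):
    """Pad variable-length frame sequences to a common length so they can
    be stacked into one batch.

    Returns the padded samples plus a mask marking real (1) vs padded (0)
    positions, so a model can ignore the padding.
    """
    max_len = max(len(s) for s in samples)
    padded = [s + [pad_value] * (max_len - len(s)) for s in samples]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in samples]
    return padded, mask

batch, mask = pad_batch([[1, 2, 3], [4]])
print(batch)  # [[1, 2, 3], [4, 0, 0]]
print(mask)   # [[1, 1, 1], [1, 0, 0]]
```

The alternative is to sample or clip every video to a fixed number of frames, which keeps batching trivial at the cost of throwing information away.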

Just a little plug for Dokku as a free/on-premise option for Heroku buildpacks. I’m using it on a workstation and some rented servers, and it’s a great PaaS.

Blog post about it here: https://www.christianwerner.net/tech/Deployment-for-cheapskates/

After giving it some thought, I rearranged when pose detection will show up. I believe this will be better, as the technique builds on both topics discussed the previous week (object detection and keypoints).