A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

Hi – I am trying to run all the notebooks locally on a GTX 1060 as well, and I can get most of them running.

@sgugger - might this be some version issue?

The MNIST project, however, gives me grief ;-( with the following error:

RuntimeError: Cannot pickle CUDA storage; try pickling a CUDA tensor instead

This happens in the first learn.lr_find() call, right after the network summary is reported, which looks just fine.

The top of the traceback is:

RuntimeError Traceback (most recent call last)
in
----> 1 learn.lr_find()

~\Miniconda3\envs\fastai-py37\lib\site-packages\fastai2\callback\schedule.py in lr_find(self, start_lr, end_lr, num_it, stop_div, show_plot)
194 n_epoch = num_it//len(self.dbunch.train_dl) + 1
195 cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
--> 196 with self.no_logging(): self.fit(n_epoch, cbs=cb)

~\Miniconda3\envs\fastai-py37\lib\site-packages\torch\multiprocessing\reductions.py in reduce_storage(storage)
320 from . import get_sharing_strategy
321 if storage.is_cuda:
--> 322 raise RuntimeError("Cannot pickle CUDA storage; try pickling a CUDA tensor instead")
323 elif get_sharing_strategy() == 'file_system':
324 metadata = storage.share_filename()

RuntimeError: Cannot pickle CUDA storage; try pickling a CUDA tensor instead
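
One workaround I have seen suggested (a sketch, not a verified fix) is to disable the DataLoader worker processes: Windows spawns its workers and has to pickle whatever they need, which fails for anything living in CUDA storage.

```python
# `mnist` and `path` stand in for the notebook's DataBlock and data path.
# num_workers=0 keeps data loading in the main process, avoiding the pickling.
dbunch = mnist.dataloaders(path, num_workers=0)  # or .databunch(...) on older fastai2 builds
```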

Kind regards

Raynier.

Does the course-v3 MNIST work? https://github.com/fastai/fastai2/blob/master/nbs/course/lesson7-resnet-mnist.ipynb I checked them all a while ago, but something could’ve changed.

I’m looking into it – I’m trying to run through the segmentation project now… I’ll get back with my findings.

Thanks again for the quick follow up!

Quick edit: Just to report that the notebook runs fine in Colab.

René
Vashon Island (WA)

What is the difference between from fastai2.basics import * and from fastai2 import basics?

Hi muellerzr, hope you’re well.
I was trying to fix the Colab/torchvision issue until 03:00 GMT. I had run the lesson one notebook earlier in the day, making my own classifier, and I couldn’t understand why it wasn’t working.

How did you pinpoint torchvision as the issue?

cheers mrfabulous1 :smiley::smiley:

Hi @mrfabulous1! I saw the numerous issues, so first I checked the 3 fastai repositories to see if something had been pushed. It hadn’t. That told me it was (probably) PyTorch, given the problem was with the DataLoaders. From there I checked whether torch or torchvision had any updates within 4 hours of the post; torchvision was the culprit, having just released 0.5.0.


Hi NandoBr, great post!
Your meetup page looks good.
Do you have an English translation of the Heroku tutorial? Or should I use Google Translate? :smiley::smiley::laughing::laughing:
I have a Heroku account which I haven’t used in ages, so I will host my next classifier on it.

Many thanks, mrfabulous1 :grinning::grinning:

Hi muellerzr
Cheers, I’ll try working smarter, not harder, next time!
:laughing::laughing:


Importing basics means that if we want to use anything from the basics module we need to write basics.myFunc, whereas importing everything FROM basics lets us just write myFunc.
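
For illustration, here is the same distinction with a standard-library module (math standing in for fastai2.basics; myFunc above is just a placeholder name):

```python
import math
print(math.sqrt(2))  # qualified access: module.name

from math import *
print(sqrt(2))       # the module's names are pulled into this namespace
```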

Hi @muellerzr,

First of all, I would like to thank you for all the efforts and the nice course!
I was wondering if it’s possible to input more than two blocks (x and y) in the data block. For example, I would like to input both an image and text together to infer a label. Or, more specifically to vision, an image and a mask to get a prediction.

Thanks!

Yes! Eventually we will cover this with bounding boxes, but it is possible. In your DataBlock you can pass N blocks to blocks, then set n_inp=2 to have the first two be your inputs.

I.e.:

```python
DataBlock(blocks=(ImageBlock, ImageBlock, CategoryBlock), n_inp=2)
```

(plus the other DataBlock information)
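
To make that concrete, here is a hedged sketch of a two-image-input block (the DataFrame and column names are hypothetical, and on older fastai2 builds the final call may be .databunch rather than .dataloaders):

```python
from fastai2.vision.all import *

# Hypothetical setup: a DataFrame `df` whose rows hold paths to two images
# plus a label column. One getter per block, in the same order as `blocks`.
dblock = DataBlock(
    blocks=(ImageBlock, ImageBlock, CategoryBlock),
    n_inp=2,                       # first two blocks are inputs, the last is the target
    getters=[ColReader('img_a'),   # first input image
             ColReader('img_b'),   # second input image
             ColReader('label')],  # categorical target
    item_tfms=Resize(224),         # type-dispatched: only the image tensors are resized
)
dbunch = dblock.dataloaders(df)    # or dblock.databunch(df) on older builds
```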

Then you could input as many as you want?

Would you give an example of vision + text/tabular during the course? If this is expected to work as easily as that, I think it is such a great tool!

Also, how do you manage the transforms? Do you pass them as a list for all the inputs you use?

Absolutely you could! I won’t have an example of this, I’m sorry, but I do know Jeremy and sgugger have it on their to-do lists.

I am confused about the transforms part. As in, how do we transform the multiple types?

That’s great! I will look out for those docs.

Regarding the transforms, yes, I was wondering whether you can apply different transforms to the different inputs, i.e. transforms for images, transforms for text and so on, and, if so, how you specify that in the code.

I am still getting familiar with the transforms myself, and I haven’t looked into that yet. Perhaps post in another thread or the v2 chat, so it can help others when you get an answer :slight_smile: (and so it’s easier to find than buried in this thread).


Thank you.

FYI, sgugger posted about this. PyTorch released an update too. Use version 1.4.0.
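
For anyone who wants to check their environment, a quick sanity check (the pip line in the comment is the usual way to re-pin, assuming a pip-based install):

```python
# If these don't match, re-pinning was the reported fix, e.g.:
#   pip install torch==1.4.0 torchvision==0.5.0
import torch, torchvision
print(torch.__version__, torchvision.__version__)  # expect 1.4.0 and 0.5.0
```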

@muellerzr, can you please take a look at my post here: Anomaly detection using fast.ai?

I know that you will be covering tabular data later on in your course, but I just wanted to start playing around right away. Thank you!

Sorry @mrfabulous1, but I don’t have the deployment tutorial in English yet. It is important to point out that the tutorial is for fastai v1, not for fastai2. I believe it will need to be updated for the new library’s requirements.


I’ve updated the schedule above. This week, along with what was originally posted, we will go over what deployment looks like using a Render template (which in reality is just Starlette) for almost any scenario (images, text, and tabular), running it on your local machine, and exploring and navigating the source code. I’ll release a link tomorrow morning!

Also, next week (week 3) we will go over k-fold validation.
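
For anyone who wants a head start, one common way to wire k-fold validation into fastai2 is to let sklearn produce the index splits and hand each fold’s validation indices to IndexSplitter. A sketch under assumptions (PETS data, with that dataset’s usual uppercase-means-cat labelling):

```python
from fastai2.vision.all import *
from sklearn.model_selection import KFold

items = get_image_files(untar_data(URLs.PETS)/'images')
def is_cat(p): return p.name[0].isupper()  # PETS convention: cat breeds are capitalised

for fold, (_, valid_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=42).split(items)):
    dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
                       get_y=is_cat,
                       splitter=IndexSplitter(valid_idx),  # this fold's validation set
                       item_tfms=Resize(224))
    dbunch = dblock.dataloaders(items)  # or .databunch(items) on older builds
    # build a fresh learner per fold here, fit it, and record its metrics
```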
