A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

Yes, it’s a custom transform - how do you give a .name property?

I used the fastai2_audio code

def AudioBlock(cls=AudioTensor): return TransformBlock(type_tfms=cls.create, batch_tfms=IntToFloatTensor)

auds = DataBlock(blocks=(AudioBlock, CategoryBlock),

Also, I don't see a .name property in the original transforms in fastai2.data.block

I’m unfamiliar with the audio library right now and I know they’re working on adjusting a number of things for v2. You’d want to look at specifically the transforms. It also looks like you’re not including them into your DataBlock call, pass in your item_tfms and batch_tfms there.
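As a sketch of that suggestion: the item and batch transforms would go into the DataBlock call itself rather than into the block definition. This is a hypothetical configuration only — the transform names (ResizeSignal, AudioToSpec) and getter functions here are assumptions based on the fastai2_audio API of the time, not a verified working example:

```python
# Hypothetical sketch: pass item_tfms/batch_tfms to DataBlock directly.
# AudioBlock, ResizeSignal, and AudioToSpec are assumed fastai2_audio names.
auds = DataBlock(blocks=(AudioBlock, CategoryBlock),
                 get_items=get_audio_files,      # assumed getter
                 get_y=parent_label,
                 splitter=RandomSplitter(),
                 item_tfms=[ResizeSignal(1000)], # example item transform
                 batch_tfms=[AudioToSpec()])     # example batch transform
```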

Got it. I'll look at the Pipeline and DataBlock videos to understand the code.

I am sharing the notebook link since, as @Srinivas pointed out, a couple of important lines are missing from the GitHub repo.


This week was again great! I loved the insights on how to debug the code, and also the Imagenette leaderboard with working code examples!
Please correct me if I am wrong: these leaderboard code examples take models available in vision.models and play with the parameters. So the architecture of resnet50 is not the same as xresnet50, is it? Is xresnet an improved version?
Is this the complete list of models? I believe we can also use any of these (coming from PyTorch) without doing anything special.

Now you’re getting the idea :wink: XResNet brings together 5-6 different paper ideas (and still one or two more). Some of those work, some don't. They're experiments people (and Jeremy) have tried. That's certainly not the limit (you can take any PyTorch model and use it); it's just what fastai2 has natively

If you want more models to confuse yourself with :wink: take a look at this wonderful repository (seriously, it’s great!)

We'll go over how to transfer learn from direct pretrained weights like the ones here and use them with our models (instead of having the nice option of pretrained=True)
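The core mechanic behind that is loading a saved state dict into an identically shaped model. A minimal PyTorch sketch, with a tiny Sequential standing in for a real architecture and a locally saved file standing in for downloaded weights:

```python
import torch
import torch.nn as nn

# Stand-in architecture; a real case would use e.g. an XResNet or VGG.
src = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
torch.save(src.state_dict(), 'pretrained.pth')  # stand-in for a downloaded weights file

# A fresh model with the same architecture receives the weights,
# instead of relying on a pretrained=True flag.
dst = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
state = torch.load('pretrained.pth', map_location='cpu')  # load on CPU
dst.load_state_dict(state)                                # copy the weights in
```

The map_location='cpu' argument also matters later in this thread: it is what lets GPU-trained weights deserialize on a CPU-only machine.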

We’ll use a VGG16/19 this week for style transfer too


Will you tell us more about it? It indeed looks like a source of all knowledge :wink:


Writing about it now :wink: Expect it after we get done with the style transfer portion of this lesson


That's completely right, and to the best of my knowledge there is no way of doing this with fastai yet

@jeremy you may be better tasked with knowing this, how would we do this in fastai? (if we can)

Does somebody know an equivalent of this in fastai2? It was fantastic to experiment with a portion of the total dataset

It looks like no, but give me a moment.

If you want to try to figure out a way to go about implementing it, take a look at RandomSplitter:

def RandomSplitter(valid_pct=0.2, seed=None, **kwargs):
    "Create function that splits `items` between train/val with `valid_pct` randomly."
    def _inner(o, **kwargs):
        if seed is not None: torch.manual_seed(seed)
        rand_idx = L(int(i) for i in torch.randperm(len(o)))  # shuffled indices over all items
        cut = int(valid_pct * len(o))                         # size of the validation set
        return rand_idx[cut:],rand_idx[:cut]                  # train indices, valid indices
    return _inner

I might play with that this weekend if no one else does; I've been wanting something like that. The only thought I had is that if there isn't shuffling of items somewhere, or some kind of check to make sure you are not missing labels, you might end up missing labels in your train/valid sets.

@foobar8675 I was imagining something similar to this, where once they're split into train/valid you then sample those lists. Or you do a wrapper around any split; that way we can use any split function, not just RandomSplitter.
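The idea being discussed could be sketched roughly like this. This is a framework-free illustration (plain Python lists and random instead of fastai's L and torch.randperm), and the function name is made up for the example — not an actual fastai API:

```python
import random

def random_subset_splitter(valid_pct=0.2, subset_pct=0.1, seed=None):
    """Sketch: take a random subset of the items, then split it train/valid.

    Mirrors the shape of fastai2's RandomSplitter (returns a function that
    maps items -> (train_idxs, valid_idxs)), but only keeps `subset_pct`
    of the data, which lets you experiment on a portion of the dataset.
    """
    def _inner(items):
        rng = random.Random(seed)
        idxs = list(range(len(items)))
        rng.shuffle(idxs)
        keep = idxs[:int(subset_pct * len(items))]  # the random subset
        cut = int(valid_pct * len(keep))            # size of the validation split
        return keep[cut:], keep[:cut]               # train, valid
    return _inner
```

Note this sketch does not address the label-coverage concern raised above: with a small subset_pct, rare classes may end up entirely absent from train or valid.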

Hi muellerzr, hope all is well!
I built an image classifier model on Colab about five minutes ago, using the starter code for render.com as my baseline.

When trying to deploy the app locally I get the following error.

raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use **torch.load with map_location=torch.device('cpu')** to map your storages to the CPU.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "app/server.py", line 52, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "/opt/anaconda3/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "app/server.py", line 45, in setup_learner
    raise RuntimeError(message)

This model was trained with an old version of fastai and will not work in a CPU environment.

Please update the fastai library in your training environment and export your model again.

I thought I saw a link where you had created a starter repo for fastai2 but haven’t been able to find it again.

My local machine does not have a GPU, but I have at least 70 different classifiers which run no problem using fastai1-v3 on my local machine.

What do I need to change or configure to resolve this error?

Many thanks for your help mrfabulous1 :smiley: :smiley:

This means that the model is being loaded in as CUDA when in fact you don't have a GPU. How are you getting the model in? Via load_learner? If so, try doing as the suggestion says, and instead load in your model as:

learn = torch.load('mymodel.pkl', map_location=torch.device('cpu'))

Hi muellerzr

learn = torch.load('mymodel.pkl', map_location=torch.device('cpu'))

That’s fabulous I tried something similar but I left ‘with’ in there.

Thanks for your help mrfabulous1 :smiley: :smiley:


Doing something very similar right now, that’s how I caught it, I did the same thing of sorts :slight_smile:


Thanks for doing this. I'm mostly interested in NLP and tabular data these days, but I'm starting to watch the videos and catching up now, as all my work so far has been with version 1. Eagerly awaiting Block 2+3 :smiley:


Hi @muellerzr,

Would I be able to join the group just for the tabular data lectures without going through the previous lectures? Please let me know.