A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

Hi,

I am trying to follow the Migrating notebook from fastai2 in order to use some plain PyTorch code with fastai2.

It seems that everything works until I try fit_one_cycle, when I get an error about pbar. Disabling it does not seem to solve the issue. Here is the code with the error.

Any ideas how to solve the problem? Thanks

P.S.: Since this is only loosely related, should I open a new post?

@Joan did you install the most recent version of fastprogress?

I think so:

fastprogress.__version__
'0.2.2'

Unsure, I think this would be better as a separate forum post :slight_smile:

1 Like

I was looking today at how to run models with multiple data types (e.g., in my case, tabular and image). I finally found this awesome blog post. It is for fastai v1, but I will explore whether it works for v2 as well, or can be easily adapted, and update you all. I am sharing the link since I believe it could be of interest to you at some point!

2 Likes

@mgloria IIRC someone is working on that as we speak (it may help you get started). Follow the discussion here:

The hardest part was dealing with the tabular data itself (mixed image and text shouldn't be hard since they have DataBlocks for them)

2 Likes

I’ve never done a PyTorch course, so I’m afraid I have none to recommend :slight_smile:

A small suggestion, @muellerzr: since you mute parts of the video during the live stream, viewers watching after the stream have to skip those sections manually. I found this project https://github.com/carykh/jumpcutter which speeds up the segments with no audio, saving both your time and the viewers'.
I tested it on one of your videos with the default parameters and it cut a 1h 15min video down to 48 min. That's 27 minutes saved!!
I suggest you upload a copy of these edited videos for after-stream viewers. :slightly_smiling_face:

5 Likes

@vijayabhaskar great find!!! I’ll definitely do that :slight_smile:

3 Likes

I found this article recently, https://www.techrepublic.com/article/how-to-learn-pytorch-a-resources-guide-for-developers/, which has a list of good resources for learning PyTorch. I'm currently going through https://notebooks.azure.com/pytorch/projects/tutorials, which I found via the first link; so far I find those notebooks really useful. I also recommend looking at https://github.com/rasbt/deeplearning-models which is a goldmine!

2 Likes

OK, thanks. I got it to work with Stratified K-Fold with a couple of small changes.

1 Like

If you wanted to, you could PR it to the study group repo :wink:

Sorry, there are too many other comments, what-ifs, and unrelated changes that I made in the notebook (so I could better understand how to work with various concepts), so it's probably best to leave your notebook alone.

On top of Gloria’s mods to support Stratified K-Fold, which are already incorporated in the notebook, the only two other changes (sketched below) are:

  1. Use the defined skf instead of kf in the loop:
    for _, val_idx in skf.split(np.array(train_imgs[:7220]), train_labels):
  2. As shown on the line above, pass np.array(train_imgs[:7220]) instead of np.array(train_imgs) as the first parameter to skf.split.

That is it.
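
For anyone following along, here is a minimal runnable sketch of how those two changes fit together. Only the sklearn StratifiedKFold API (and fastai's IndexSplitter, mentioned in a comment) is standard; train_imgs, train_labels, and the 7220 cutoff are assumptions taken from the notebook, and the dummy data below just stands in for them.

import numpy as np
from sklearn.model_selection import StratifiedKFold

# Stand-ins for the notebook's variables: there, train_imgs is a list of image
# file paths and train_labels their class labels (the notebook slices to
# train_imgs[:7220] so that only the labelled training portion is split).
train_imgs = [f"img_{i}.png" for i in range(100)]
train_labels = np.random.randint(0, 2, size=100)

# StratifiedKFold preserves the class balance of train_labels in every fold.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

for fold, (_, val_idx) in enumerate(skf.split(np.array(train_imgs), train_labels)):
    # Each val_idx is a stratified validation fold; it could be handed to an
    # IndexSplitter when building the DataBlock for that fold.
    print(f"fold {fold}: {len(val_idx)} validation items")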

I am printing the summary of a datablock
auds.summary(data_p)

But I am getting the following error. I am very new to fastai2, so how should I debug it?

Setting-up type transforms pipelines
Collecting items from /content/clips3
Found 2000 items
2 datasets of sizes 1600,400
Setting up Pipeline: (#2) [Transform: True (object,object) -> noop ,Transform: True (object,object) -> create ]
Setting up Pipeline: (#2) [Transform: True (object,object) -> create_label ,Categorize: True (object,object) -> encodes (object,object) -> decodes]

Building one sample
Pipeline: (#2) [Transform: True (object,object) -> noop ,Transform: True (object,object) -> create ]
starting from
/content/clips3/2019-11-14-03-49-41-693116.wav


AttributeError                            Traceback (most recent call last)
in ()
----> 1 auds.summary(data_p)

1 frames
/usr/local/lib/python3.6/dist-packages/fastai2/data/block.py in _apply_pipeline(p, x)
    109     print(f" {p}\n starting from\n {_short_repr(x)}")
    110     for f in p.fs:
--> 111         name = f.name
    112         try:
    113             x = f(x)

AttributeError: 'Transform' object has no attribute 'name'

@shruti_01 It looks like you may be using a custom transform? You should give it a .name property so that summary can pick up what to call it.

Otherwise there's not much we can do without seeing how the DataBlock was formed :slight_smile:
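
As a rough illustration of what that could look like (a minimal sketch, assuming a custom Transform subclass; MyAudioTfm is a made-up name, and the real fix may differ depending on how the audio transforms are defined), a plain name attribute on the class is enough for summary's f.name lookup to find something:

from fastcore.transform import Transform

class MyAudioTfm(Transform):
    # Hypothetical custom transform; `name` is just a plain attribute here so
    # that code doing `f.name` (as DataBlock.summary does) finds a label.
    name = "MyAudioTfm"

    def encodes(self, x):
        return x  # placeholder: the real transform logic would go here

tfm = MyAudioTfm()
print(tfm.name)  # 'MyAudioTfm'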

Yes, it’s a custom transform. How do you give it a .name property?

I used the fastai2_audio code

def AudioBlock(cls=AudioTensor):
    return TransformBlock(type_tfms=cls.create, batch_tfms=IntToFloatTensor)

auds = DataBlock(blocks=(AudioBlock, CategoryBlock),
                 get_items=get_audio_files,
                 splitter=RandomSplitter(),
                 get_y=some_function)

Also, I don't see a .name property in the original transforms in fastai2.data.block.
@muellerzr

I’m unfamiliar with the audio library right now, and I know they’re working on adjusting a number of things for v2. You’d want to look specifically at the transforms. It also looks like you’re not including them in your DataBlock call; pass in your item_tfms and batch_tfms there.
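
As a vision analogue (a minimal sketch only, since I don't know the audio transforms; ImageBlock, get_image_files, parent_label, Resize, and aug_transforms are standard fastai2 names, and the audio version would put its own transforms in the same two slots):

from fastai2.vision.all import (DataBlock, ImageBlock, CategoryBlock, RandomSplitter,
                                get_image_files, parent_label, Resize, aug_transforms)

# Building the DataBlock does not touch any data yet; the point here is simply
# where the transform arguments go.
dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
                   get_items=get_image_files,
                   splitter=RandomSplitter(),
                   get_y=parent_label,
                   item_tfms=Resize(224),        # per-item transforms, run on the CPU
                   batch_tfms=aug_transforms())  # per-batch transforms, typically on the GPU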

Got it. I'll look at the Pipeline and DataBlock videos to understand the code.

I am sharing the notebook link since, as @Srinivas pointed out, a couple of important lines are missing from the GitHub repo.

1 Like

This week was great again! I loved the insights on how to debug the code and also the Imagenette leaderboard with working code examples!!
Please correct me if I am wrong: these leaderboard code examples take models available in vision.models and play with the parameters. So the architecture of resnet50 is not the same as xresnet50, is it? Is xresnet an improved version?
Is this the complete list of models? I believe we can also use any of these (coming from PyTorch) without doing anything special.
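
In case it helps, here is a minimal sketch of both options, assuming a dls DataLoaders object already exists. Passing a torchvision model to cnn_learner is standard; how cleanly it cuts xresnet50 may depend on your fastai2 version.

from fastai2.vision.all import cnn_learner, accuracy, xresnet50
from torchvision.models import resnet50

# dls is assumed to be a DataLoaders built earlier (e.g. with ImageDataLoaders).

# A plain torchvision model can be passed to cnn_learner like any fastai arch:
learn_tv = cnn_learner(dls, resnet50, metrics=accuracy)

# xresnet50 is fastai's tweaked ResNet ("bag of tricks" style changes), so it is
# related to, but not identical with, torchvision's resnet50:
learn_x = cnn_learner(dls, xresnet50, metrics=accuracy)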