A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

This is awesome news, count me in!
Could you include some time series tabular discussion in the tabular section?
Happy New Year!!
Tom

1 Like

Happy New Year everyone, we are just two weeks away! I’m very excited to do this with you all :slight_smile:

Here is the YouTube channel everything will be uploaded to:

Youtube

The plan is that recordings will be posted shortly after the streams have ended.

On tabular:
I will do my best! The time-series library for v2 is not done yet, so I cannot dive into that. But I do have a few techniques I will present on (particularly if it winds up being after a conference I plan on going to). If not by that time, I'll make a notebook when I can :slight_smile:

9 Likes

Would love to see the revised notebook and results, comparing TabNet, NODE, and fastai.

1 Like

As we’re about one week away, here are some major changes I am making to the schedule:

Week 5 will just be style transfer: we will go over how to build a model and loss function from scratch, and then use the nbdev library to convert the code we write into what is needed for deployment (we will also deploy with as little fastai code as possible).

As a result, week 6 will cover pose regression (both with the standard regression fastai allows and with a new heatmap-based technique). I’m really excited for how lesson 5 will play out, and I hope you will be too! See you all on Wednesday! :slight_smile:

11 Likes

One last update before we begin, I swear :wink: I’ve adjusted the first post to reflect the current schedule I have planned. There will be a break week between the Image block and the Tabular block due to Spring Break :slight_smile: (aka no session the week of March 8th). Lastly, for those wanting to know what you need beforehand (one last time): please have a Google account so we can use Google Colab for the first lesson. For 90% of this you shouldn’t need to spend a dime on GPU credits :slight_smile:

Soft Release of the Interview with @muellerzr. Please DO NOT share outside of the forums yet.

Hi everyone! As you might know, I follow a Sunday and Thursday release schedule, but since Zach’s SG starts on the 15th, I’m doing a “soft release” and hoping to do a proper release later.

This interview covers the course and much more: top-down learning, and the fast.ai course as well. So I’d recommend it as a soft pre-req, with a definite pre-req of a strong cup of chai :smiley:

I’ll release the interview later, but I wanted to share it here to give some insight into the SG. I hope you enjoy it as much as I did :slight_smile:

16 Likes

Thanks for having me on, @init_27, it was a lot of fun :slight_smile: Heads up for you all: potential course/study group spoilers in the interview :wink:

8 Likes

That’s quite informative. Back here after a while, looking forward to catching up on all the good stuff that happened last year!

2 Likes

Thanks for this! Looking forward to the study group (and subsequently becoming a Kaggle grandmaster 48 hours later) :v:

2 Likes

@muellerzr This looks great! Can’t wait to join in.

I was wondering if you plan on covering how to use other PyTorch models with fastai in some depth.

I know that the easiest way is something like:

import torchvision
from fastai.vision import *   # fastai v1-style imports for Learner, accuracy, nn

model = torchvision.models.mobilenet_v2(pretrained=True)
# swap the final Linear layer so the head outputs data.c classes
model.classifier[1] = nn.Linear(model.classifier[1].in_features, data.c)
learn = Learner(data, model, metrics=accuracy)

That’s fine to get some basic stuff going, but calling this Learner constructor loses the layer groups of the model. The output of len(learn.layer_groups) equals 1.
As a result, you can’t use differential learning rates, or freeze the model after having unfrozen it once – that’s just tragic.

I’ve seen this issue brought up in different places (I haven’t bookmarked them, so I can’t share links right now), but with no definitive answer.

I’ll confess that I haven’t dug deep for a solution, but I believe this would be immensely useful and a great addition to the course, as it opens up an entire world of under-explored PyTorch models.

2 Likes

@rsomani95 I will be! And you actually can, you just need to declare your layer groups and split the model. Look at the split functions (this is fastai v1 code: https://github.com/fastai/fastai/blob/master/fastai/vision/learner.py) and see how they are applied :wink: (Also follow how cnn_learner actually gets our differential learning rates and builds our model. It should provide some answers.)

When we discuss bringing in outside models I’ll show how we can generate these layer groups and “freeze” them for transfer learning, just like we do with our cnn models. There will be more than a few lessons where we bring outside models into the framework for a variety of tasks (vision, NLP, and tabular).
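To make that concrete, here is a rough sketch of what declaring your own layer groups might look like, using the fastai v1 API from the question above. This is my own illustration, not course code: the data DataBunch, the choice of MobileNet v2, the split point at m.features[7], and the learning rates are all placeholder assumptions.

import torchvision
from fastai.vision import *

# wrap a plain torchvision model in a Learner, then declare layer groups
model = torchvision.models.mobilenet_v2(pretrained=True)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, data.c)
learn = Learner(data, model, metrics=accuracy)

# declare 3 layer groups: early backbone / late backbone / classifier head
learn.split(lambda m: (m.features[7], m.classifier))

learn.freeze()                                    # train only the head first
learn.fit_one_cycle(1, 1e-3)
learn.unfreeze()                                  # then fine-tune everything
learn.fit_one_cycle(1, max_lr=slice(1e-5, 1e-3))  # one LR per layer group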

5 Likes

That’s super helpful, thank you!

Super! Can’t wait :smiley:

1 Like

Hi muellerzr, hope all is well and you are having a superb day!

Has the streaming URL been released for tonight’s lesson yet?
If so, can you confirm what it will be, or what time it will be available?

I know the livestreams will be from 5pm to 7:30pm Central Standard Time on Wednesdays.
For me this is normally bedtime, so I would like to check everything a few minutes before the stream starts.

Cheers, mrfabulous1 :smiley::smiley: PS: nice video!

1 Like

Hi All!
To help @muellerzr with the SG, I’ll volunteer to be an unofficial TA.
Please feel free to @ me with any and all questions. I’ll be taking the SG myself, and even though I might not be able to provide answers as wise as Zach’s, I’d be happy to make myself available.

Looking forward to being a part of the SG and walking the walk with everyone :smiley:

Best Regards,
Sanyam

10 Likes

What is the link to join the study group please?

1 Like

You’re already here :slight_smile: If it gets too cluttered I may make separate threads for the topics (vision, tabular, NLP).

@mrfabulous1 I will post a link here later today to where the stream will go live.

4 Likes

Note that we have a complete example of this here:

2 Likes

Thanks for the response! :slight_smile:

I’ve gone through an older version of this notebook, but it doesn’t have exactly what I was asking here:


I was asking about Learner, which the notebook has an example of, but after following @muellerzr’s response, I realised what I actually wanted to construct was a cnn_learner. Here’s how I did it:

import torchvision
from fastai.vision import *   # fastai v1 API (cut / split_on)

arch = torchvision.models.mobilenet_v2
# split the backbone partway through and before the head, giving 3 layer groups
mobilenet_split = lambda m: (m[0][0][10], m[1])

learn = cnn_learner(data, arch, pretrained=True, cut=-1,
                    split_on=mobilenet_split)

For future readers: cut was essential to make cnn_learner work, and mobilenet_split divides the model into 3 layer groups, allowing you to make use of different learning rates for different layer groups.
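As a quick follow-up sketch (hypothetical usage with placeholder epoch counts and learning rates, not part of the original post): with those three layer groups in place you can transfer learn the head first, then unfreeze and apply differential learning rates across the groups.

learn.freeze()                     # train only the final layer group (the head)
learn.fit_one_cycle(3, 1e-3)
learn.unfreeze()                   # fine-tune all three layer groups
learn.fit_one_cycle(3, max_lr=slice(1e-5, 1e-3))   # lower LRs for earlier groups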


UPDATE: This is now part of the fastai library, courtesy of this PR, so you can simply do:

learn = cnn_learner(data, models.mobilenet_v2, pretrained=True)
5 Likes

Here is the link for the stream:

(It will go live at 4:45 CST)

8 Likes

@muellerzr I suggest you put this in the top post too.

1 Like