A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

@muellerzr Is it possible for you to make a notebook that introduces the high-, mid-, and low-level fastai APIs for any problem of your choice? The fastai API docs aren’t straightforward enough for me; I’m often confused about what to use and when (DataBunch, DataBlock, etc.). A clear intro to all the APIs would benefit beginners a lot.
PS: It doesn’t need to be a video; a notebook would do, or even a simple flow graph.


That’s the goal of the notebooks. Each will have a very different example and show the best way to approach it. From what I’ve found, though, 99% of the problems you’ll try to do can be handled with the mid-level API, which is what we’ll use the most. MNIST shows the lowest level, but I think that’s the only example I’ll wind up doing at that level (since literally any other problem can use the mid-level DataBlock); possibly tomorrow’s lecture may as well, with k-fold.
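
For a rough idea, here’s a minimal sketch of that mid-level DataBlock workflow (the 'images' folder, splitter, and labelling choices are purely illustrative, and I’m using the current fastai import path; the course itself used fastai2):

from fastai.vision.all import *

# Mid-level API: declare how to build the data, then materialise DataLoaders from a path
dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),      # inputs are images, targets are categories
    get_items=get_image_files,               # gather image file paths
    splitter=RandomSplitter(valid_pct=0.2),  # random 80/20 train/validation split
    get_y=parent_label,                      # label each image by its parent folder name
    item_tfms=Resize(224))                   # resize every item before batching

dls = dblock.dataloaders('images')           # 'images' is a hypothetical folder of class subfolders
learn = cnn_learner(dls, resnet18, metrics=accuracy)

The same DataBlock declaration covers most vision problems; usually only the blocks, get_y, and transforms change.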


Yes. torch.randn(n) is just random noise drawn from a standard normal distribution; it is not the bias b.
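
For illustration, a tiny PyTorch snippet contrasting the two:

import torch

noise = torch.randn(3)                   # fresh standard-normal samples on every call
print(noise)                             # e.g. tensor([ 0.4967, -0.1383,  0.6476])

b = torch.zeros(3, requires_grad=True)   # a bias is a learnable parameter that training updates
print(b)                                 # tensor([0., 0., 0.], requires_grad=True)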


@jeremy could you edit the top post and make it a wiki for me please? :slight_smile:

Otherwise, here is the link to tomorrow’s stream:

We will be going over 03_Multi_Label, 03_Unknown_Labels, 03_Cross_Validation, and 03_Internal_API_Walkthrough


@muellerzr I’m working through the first video, and thanks for the GitHub help. As an FYI, I used to use the JS snippet, but I’ve switched to a Chrome plugin called “Download All Images”, which I’ve had good luck with. You have to do a little cleanup since it downloads icons, etc., but it’s nice if you want the images locally.

I have an idea for today’s lesson, let me know if it would be of interest :slight_smile:

Today’s lesson is a bit briefer (in terms of walking through the code, not so much running the code with the KFold), so I was debating going through some of the super-low-level API (like what a PILImage is, etc.). We will eventually get into it briefly next week and in more detail in 2-3 weeks, but I would like to know if you’d rather do that now :slight_smile:
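
To give a taste of what I mean by “super low level”, a minimal sketch (hypothetical image path; current fastai import names, fastai2 at the time):

from fastai.vision.all import *

# PILImage is fastai's thin wrapper around a PIL image; .create builds one from a path
img = PILImage.create('images/cat.jpg')   # hypothetical path
print(type(img), img.size)                # <class 'fastai.vision.core.PILImage'> (width, height)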


YES PLEASE :slight_smile: That would be very helpful.
No amount of time spent on this would be too much :wink:
Thank you @muellerzr


Sure :slight_smile: We now have a 03_Internal_API notebook we will go through.

(Also @jeremy can you make the top post a wiki so I can put a link to the new stream? Thank you!!!)

Done.


Thanks!

We’re live and the stream is healthy! :slight_smile: https://youtu.be/pQ7CJzGn6YE

Hi @muellerzr, hope you’re having a beautiful day today!

Just a quick one: I had a quick look at these two notebooks and noticed they are named differently inside the notebooks themselves?

03_Multi_Label → 04_Multi-Label.ipynb

03_Unknown_Labels → Lesson_1.ipynb

Is this correct?

Cheers mrfabulous1 :smiley: :smiley:

Hi muellerzr
Unknown classification - Fantastic
mrfabulous1 :smiley: :smiley:


Hey @mrfabulous1! I know I answered this in the video but:

Colab notebooks have their own naming structure that’s different from the file name, so sometimes they may not match when you open one in Colab. As long as it’s the right link, you’re set! (I’ll try to work on fixing that when I can.)


When we dig into the source code and look at the files, where do we get the TEST_IMAGE from?
I know it is possible to download another image and change the name; the reason I want to know is that I want to see whether the examples depend on the size of the TEST_IMAGE (i.e. whether particular tests are being performed that would break if I bring in a new image).

Usually, if you look at the corresponding notebooks, they will show which image they’re using, @barnacl :slight_smile:
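
If you do want to swap in your own image, a quick check like this (hypothetical file name) shows its dimensions, so any size-dependent assumptions are easy to spot before re-running an example:

from fastai.vision.all import *

im = PILImage.create('my_test.jpg')   # hypothetical replacement for TEST_IMAGE
print(im.size)                        # (width, height) in pixels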

Any updates on this?

Can you show me what you’re using? And are you on the most recent version? :slight_smile:


Most recent version (Colab always is, lol). I’m following something very similar to your notebook here; I just removed the Cuda() tfm.

This is what I have:

from fastai.vision.all import *   # fastai2.vision.all at the time of the course
dset = Datasets(get_image_files('scenes'), tfms=[PILImage.create])   # items: image paths decoded to PILImage
dl = TfmdDL(dset, after_item=[Resize(64), ToTensor()],               # per-item tfms, applied after loading
            after_batch=[IntToFloatTensor(), Normalize.from_stats(*imagenet_stats)])  # per-batch tfms, run after collation
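
And a quick sanity check, just to illustrate (without Cuda() or a device argument the batch comes back on the CPU):

xb, = dl.one_batch()          # one-element tuple, since the Datasets only has the image pipeline
print(xb.shape, xb.device)    # e.g. torch.Size([64, 3, 64, 64]) cpu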

It doesn’t surprise me that that notebook is broken; I haven’t updated it lately :sweat_smile: Hmmm, are you in a CUDA runtime?