Lesson 1 - Official topic

Yah, I did that and it didn’t work (I forget the error) … if I recall, both were needed.

There are decades of theory across thousands of papers covering many, many aspects of the learning process. We’ll be diving into the bits that are most useful to know in the coming weeks, but those who are interested are most welcome to pick areas that catch their attention to study more closely.

That sounds fine :slight_smile: Thanks for checking.

# Assumes the usual fastai v2 import: from fastai.vision.all import *
pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(),
                 get_y=using_attr(RegexLabeller(r'(.+)_\d+\.jpg$'), 'name'),
                 item_tfms=Resize(460),
                 batch_tfms=aug_transforms(size=224))

Something that’s confusing me in the above snippet (from the fastai v2 docs) is why we’re using Resize to make the images 460x460, and then using aug_transforms to make them 224x224. What would be the difference if we used just aug_transforms(size=224) and left out Resize(460)?

In the past we would’ve done something like .transform(tfms, size=64), or just passed size=bla once to the data block API. Wondering what’s different about this.

Excited to get started :smiley:

I’m less familiar with the new fastai library at this point, but this is probably because you want some extra space to do transformations on. First resize to 460x460; then you have extra pixels to work with when you zoom/rotate/etc.

So in a nutshell, 460x460 gives your data augmentations more pixels to work with than if you went straight to 224x224.
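A quick back-of-the-envelope check of why 460px leaves enough headroom: a square crop of side s, taken at any rotation angle, fits inside a source square whose side is at least s·√2 (the crop’s diagonal). This is just a sketch of the geometry, not fastai code, and the function name is mine:

```python
import math

def min_source_side(crop_side: int) -> float:
    # A square crop of side `crop_side`, rotated by any angle, fits
    # inside a source square whose side is at least the crop's diagonal.
    return crop_side * math.sqrt(2)

print(min_source_side(224))  # ~316.8 -- comfortably under 460
```

So resizing to 460 first means a rotated/zoomed 224 crop can always be filled from real pixels rather than padding.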

Can you please move this question to:

https://forums.fast.ai/t/lesson-1-non-beginner-discussion/65642

Stuff like this that we haven’t covered yet in the lesson should be discussed in the “non-beginner” lesson 1 thread. Thanks! :slight_smile:

Sorry! Will do!

Thanks! I think I kinda get it. I will move my question to https://forums.fast.ai/t/lesson-1-non-beginner-discussion/65642

We conducted a FastAI Live watch-party with our virtual study group yesterday over Zoom, re-streaming the lesson at 7 PM IST. Over 70 people joined the stream, which was pretty exciting! Members have asked me to pass on their thanks to the FastAI team for letting them participate live via the virtual study group.

We’ll be conducting a review & discussion session on Saturday. We got a couple of questions from the participants:

  1. When will the fastai book be available in India? On the Amazon India site, I see it’s not available yet.

  2. Any idea how big the difference is between v3 and v4? I don’t think I can learn effectively through these live videos given my current schedule. Since the v4 videos won’t be available till July, I just want to understand whether I can learn from v3 till then instead.

From what I can see in the book, quite a bit. But know that even across previous iterations of the lectures, the concepts and ideas behind how the framework operates as a whole don’t really change. What does change is the API nomenclature and (with v2) the added flexibility we will be covering. Jeremy is also including bits from the Intro ML course and covering some data ethics topics as well. For those wanting something to watch in the meantime, I made my Walk with fastai2 lectures as a way to get around the entire API as a whole (and to help bridge the gap between v3 and the new library for those who don’t have access to part 4).

The comment in the video re Colab not saving your work no longer applies. There is a new feature in the Colab notebook menu (click the files icon and mount). This mounts your Google Drive and automatically remounts it every session for that notebook. You can os.chdir("/content/drive/My Drive"); output is saved permanently, and you can keep your input data on Google Drive rather than uploading it every session. :smiley:
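The same thing can also be done programmatically. A minimal sketch, assuming a Colab runtime: `google.colab` only exists inside Colab, so this helper (the function name is mine) is a no-op anywhere else:

```python
import os

def mount_drive_and_cd(target="/content/drive/My Drive"):
    """Mount Google Drive and cd into it; returns False outside Colab."""
    try:
        from google.colab import drive  # only importable inside a Colab runtime
    except ImportError:
        return False
    drive.mount("/content/drive")  # prompts for authorization on first run
    os.chdir(target)
    return True
```

After mounting, anything you write under /content/drive/My Drive persists across sessions.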

THANKS A LOT for making such amazing content available! :smiling_face_with_three_hearts: I have had a fever for a few days (yes… :mask:) and watching this course online really makes me feel much better already!

I hope you feel better soon! It seems like I am over here too now :confused:

We have directions for working out of Colab, but if, for instance, you work out of Jeremy’s GitHub repo, you need to explicitly save your work.

Just to add to this: though the videos are not available till July, the online version of the book is available (if I’m not wrong). The course follows the book 🙂

Quick question: I am running the first notebook and wondering why I get the results of two epochs if I am only asking for one.

Moreover, is fine_tune() the new version of fit_one_cycle()?

That is because of fine_tune. It does a fit_one_cycle before unfreezing by default. :slightly_smiling_face:
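Roughly, the call sequence looks like the sketch below. This is a simplified illustration, not fastai’s actual implementation (the real Learner.fine_tune also adjusts learning rates and takes extra arguments), and FakeLearner is just a stand-in so the sequence can be seen without a real model:

```python
def fine_tune_sketch(learn, epochs, freeze_epochs=1):
    # Simplified view of what fastai's Learner.fine_tune does:
    learn.freeze()                       # train only the new head first
    learn.fit_one_cycle(freeze_epochs)   # the "extra" epoch you observed
    learn.unfreeze()                     # then train the whole model
    learn.fit_one_cycle(epochs)          # for the epochs you asked for

# Tiny stand-in that just records the calls made to it:
class FakeLearner:
    def __init__(self): self.calls = []
    def freeze(self): self.calls.append("freeze")
    def unfreeze(self): self.calls.append("unfreeze")
    def fit_one_cycle(self, n): self.calls.append(f"fit_one_cycle({n})")

learn = FakeLearner()
fine_tune_sketch(learn, 1)
print(learn.calls)
# -> ['freeze', 'fit_one_cycle(1)', 'unfreeze', 'fit_one_cycle(1)']
```

That frozen fit_one_cycle is why fine_tune(1) prints results for two epochs.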

Oh, I see! How did you manage to get the doc about it, @barnacl? If I do doc(untar_data) I get the info, but doc(fine_tune) returns an error…

The easiest way would be to append or prepend with ??. So ??fine_tune or fine_tune?? :slightly_smiling_face:

Ah, yes! And the key part I was missing was doc(Learner.fine_tune); otherwise it returns nothing.