This course is designed to stand alone, so I’d rather not refer to previous versions. fastai v2 is a rewrite from scratch, so all the API has changed.
Or conda install graphviz
Yes, I did that and it didn't work (I forget the error). If I recall, both were needed.
There are decades of theory across thousands of papers covering many, many aspects of the learning process. We'll be diving into the bits that are most useful to know in the coming weeks, but those who are interested are most welcome to pick areas that catch their attention to study more closely.
That sounds fine. Thanks for checking!
from fastai.vision.all import *  # provides DataBlock, ImageBlock, RegexLabeller, etc.

pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(),
                 get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
                 item_tfms=Resize(460),
                 batch_tfms=aug_transforms(size=224))
Something that's confusing me in the above snippet (from the fastai v2 docs) is why we're using Resize to make the images 460x460, and then using aug_transforms to make them 224x224. What would be the difference if we used just aug_transforms(size=224) and left out Resize(460)?
In the past we would've done something like .transform(tfms, size=64) or just passed size=bla once to the DataBlock API. Wondering what's different about this.
Excited to get started
Less familiar with the new fastai library at this point, but this is probably because you want to have some extra space to do transformations on. So first resize to 460x460, then you have some extra pixels to work with when you zoom/rotate/etc.
So in a nutshell, 460x460 gives you more data to work with during data augmentation than if you went straight to 224x224.
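To make that concrete, here's a quick geometric sketch (plain Python, not fastai code) of why starting larger helps. After rotating a square image by an angle θ, the largest axis-aligned square you can crop with no empty corners has side s/(cos θ + sin θ). The sizes 460 and 224 come from the snippet above; the 30-degree rotation is just an illustrative choice.

```python
import math

def max_safe_crop(side, deg):
    """Largest axis-aligned square (in px) that stays fully inside a
    side x side image after rotating it by deg degrees, i.e. a crop
    that size will contain no empty/fill corners."""
    t = math.radians(abs(deg) % 90)
    return side / (math.cos(t) + math.sin(t))

# Rotating a 460px image by 30 degrees still leaves room for a clean 224 crop:
print(round(max_safe_crop(460, 30)))  # 337 -> a 224x224 crop has no fill pixels

# Rotating a 224px image by 30 degrees cannot even cover a 224 crop:
print(round(max_safe_crop(224, 30)))  # 164 -> the corners of a 224 crop are empty
```

So resizing straight to 224 means every subsequent rotation or zoom has to invent (reflect/pad) pixels at the edges, while the 460 intermediate gives the augmentations real image data to draw from.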
Can you please move this question to:
Stuff like this that we haven't covered yet in the lesson should be discussed in the "non-beginner" lesson 1 thread. Thanks!
Sorry! Will do!
Thanks! I think I kinda get it, I will move my question to https://forums.fast.ai/t/lesson-1-non-beginner-discussion/65642
We conducted a FastAI Live watch-party with our virtual study group yesterday over Zoom, by re-streaming the lesson at 7 PM IST. Over 70 people joined the stream, it was pretty exciting! Members have requested me to convey their thanks to the FastAI team for allowing them to participate live via the virtual study group.
We’ll be conducting a review & discussion session on Saturday. We got a couple of questions from the participants:
- When will the fastai book be available in India? On the Amazon India site, I see it's not available yet.
- Any idea how much difference there is between v3 and v4? I don't think I can learn effectively through these live videos given my current schedule. Since the v4 videos won't be available till July, I just want to understand if I can learn from v3 till then instead.
From what I can see in the book, quite a bit. But know that even over previous iterations of the lectures, the concepts and ideas behind how the framework operates as a whole don't really change. What does change is the API nomenclature and (with v2) the added flexibility we will be covering. Jeremy is also including bits from the Intro to ML course and covering some Data Ethics topics as well. For those wanting something to watch in the meantime, I made my Walk with fastai2 lectures as a way to get around the entire API as a whole (and help bridge the gap between v3 and the new library for those who don't have access to part 4).
The comment in the video re Colab not saving your work no longer applies. There is a new feature in the Colab notebook menu (click the files icon and mount). This mounts your Google Drive and automatically remounts it every session for that notebook. You can os.chdir("/content/drive/My Drive"); output is saved permanently, and you can put your input data on Google Drive rather than uploading it every session.
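For anyone who'd rather mount from code than from the menu, a minimal sketch (the drive.mount call is Colab's standard API but only works inside a Colab session, so it's commented out here; the "My Drive" path is where Colab normally exposes a mounted Drive):

```python
# Inside a Colab notebook you would run:
# from google.colab import drive
# drive.mount('/content/drive')   # prompts for authorization on first run

import os

# Typical mount point for a Google Drive mounted in Colab.
drive_dir = "/content/drive/My Drive"

if os.path.isdir(drive_dir):      # only true inside a mounted Colab session
    os.chdir(drive_dir)           # notebooks/outputs now persist in Drive

print(os.path.basename(drive_dir))  # -> "My Drive"
```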
THANKS A LOT for making available such amazing content! I have been having fever for a few days (yes… ) and watching this course online really makes me feel already much better!
I hope you feel better soon! It seems like I am over here too now
We have directions for working out of Colab, but for instance if you work out of his GitHub, you need to explicitly save it.
Just to add to this, though the videos are not available till July the online version of the book is available (if I’m not wrong). The course follows the book🙂
Quick question: I am running the first notebook and wondering why I get the results of 2 epochs if I am only asking for one.
Moreover, is fine_tune() the new version of fit_one_cycle()?
That is because of fine_tune. It does a fit_one_cycle before unfreezing by default.
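Roughly, the flow looks like this. This is a conceptual sketch, not the actual fastai source (the real method also handles learning rates, e.g. discriminative rates after unfreezing); the FakeLearner class is purely a stand-in for illustration:

```python
# Hedged sketch of fine_tune's behavior: one (by default) frozen warm-up
# cycle training just the new head, then `epochs` unfrozen cycles over the
# whole network. Hence fine_tune(1) reports 1 + 1 = 2 epochs of results.
def fine_tune_sketch(learn, epochs, freeze_epochs=1):
    learn.freeze()                      # train only the new head first
    learn.fit_one_cycle(freeze_epochs)  # the "extra" epoch you saw
    learn.unfreeze()                    # then train all the layers
    learn.fit_one_cycle(epochs)

class FakeLearner:
    """Stand-in learner that just counts epochs, for illustration only."""
    def __init__(self): self.epochs_run = 0
    def freeze(self): pass
    def unfreeze(self): pass
    def fit_one_cycle(self, n): self.epochs_run += n

learn = FakeLearner()
fine_tune_sketch(learn, epochs=1)
print(learn.epochs_run)  # 2: one frozen epoch + one unfrozen epoch
```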
Oh, I see! How did you manage to find the doc about it, @barnacl? If I do doc(untar_data) I get the info, but doc(fine_tune) returns an error…