Did YOU do the homework? 😄

Thanks, I was worried I was behind when people posted questions beyond 01_intro. Do we have deliverables when we finish chapter 1, or are we just required to learn the material and be able to run the code?

The homework is not checked or anything like that. Anything we do in this course is just for us, meaning you should do whatever you feel is helping you learn :slight_smile:

Part of what seems to work very well (as Jeremy suggests in the lecture) is running code, seeing how things change with changes to inputs, checking out the docs, playing around with Jupyter notebooks -> getting acquainted with the whole ecosystem.

This thread is about giving people a bit of a helping hand with what they can do for the first lecture to get going, but generally you can come up with anything you feel would help you learn: running the code on your own data or some other dataset (maybe even one built into fast.ai, though this links to the fastai v1 docs), writing a post on the forum explaining something, asking a question, writing a blog post, creating a notebook on something that interests you and pushing it to GitHub to share on the forums, etc.

I am not sure if it's part of the top-down way of learning (I haven't read the book), but in the way the fast.ai courses play out, you control your destiny, or rather what you get from the course! Sounds very similar to life in that regard :slight_smile:

11 Likes

This is gold. :slight_smile:

6 Likes

Hi Radek,
Where can I find information on how to do this:

  • see if you can grab the fast.ai documentation notebooks and try running them
2 Likes

Hi @reshama,
I believe this is the link you're looking for.

2 Likes

Hi @radek and fellow members, can someone please update the "Did YOU do the homework? :smile:" thread with this week's homework and topics we can study on our own that fit the course.

1 Like

@0tist Please don't hesitate to do it yourself! We're all here to learn, and even though our speeds vary since we started our "walks with fastai" at different points in time, the great thing is that we're here. :slight_smile:

Most of the time someone on the forums starts something and many people follow. Radek might say this is similar to how it happens in life :slight_smile:

3 Likes

Hi @0tist, I just added things from lecture 2 that can be interpreted as homework, from my perspective. And thanks @radek for starting this thread, it has really helped me.

Regarding the instruction to read and understand the #Click me cell of Chapter 1, these are my thoughts and questions. As will be obvious, I’m a novice programmer.

```python
from fastai2.vision.all import *
```
Import everything (classes, functions, etc.) from the fastai vision library.

```python
path = untar_data(URLs.PETS)/'images'
```
I had a misunderstanding about this one. I thought untar_data(URLs.PETS) was downloading the URLs of the pet images, possibly because I'm predisposed to think of downloading URLs for the classifier in Lesson 1 of v3, but also because it's URLs plural, not URL. So I checked the docs, and it turns out there's a URLs class we're using, and PETS is one of its attributes, a constant holding the download address of the dataset. There are similar URLs attributes for other datasets, but only the fastai ones. This approach doesn't generalize to non-fastai datasets (but we'll be learning other approaches that do generalize!).

So the dataset is extracted, and the location of the extracted dataset is assigned to path. But what does the /'images' at the end do? I searched the forum and found the notes from Lesson 3 of v3, and if I'm extrapolating correctly, I think the pets dataset has a folder named 'images', and we're telling the path to point specifically to that folder, rather than to the dataset folder as a whole. Is that right?
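One way I found to check this is to print the path contents; a minimal sketch (I'm assuming the dataset extracts to subfolders like 'images' and 'annotations', which is what the docs suggest):

```python
from fastai2.vision.all import *

path = untar_data(URLs.PETS)   # downloads (if needed) and extracts; returns a Path
print(path.ls())               # should list subfolders such as 'images' and 'annotations'

# pathlib overloads '/' to join path segments, so this points at the
# 'images' subfolder rather than the dataset root:
print(path/'images')
```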

```python
def is_cat(x): return x[0].isupper()
```
Define a function is_cat to which we pass x, the filename of each pet image. A characteristic of this particular dataset is that the first character of the filename is uppercase if the file is an image of a cat, so is_cat returns True if the first character of x is uppercase, and False otherwise.
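A couple of made-up filenames, following the dataset's convention of capitalizing cat breeds, show the function in action:

```python
def is_cat(x): return x[0].isupper()

print(is_cat('Birman_12.jpg'))   # True: first character is uppercase, so a cat
print(is_cat('beagle_32.jpg'))   # False: lowercase, so a dog
```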

```python
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
```

I have questions about this one, too. The book says:

" The fourth line tells fastai what kind of dataset we have, and how it is structured. There are various different classes for different kinds of deep learning dataset and problem–here we’re using ImageDataLoaders . The first part of the class name will generally be the type of data you have, such as image, or text. The second part will generally be the type of problem you are solving, such as classification, or regression."

What is “the second part of the class name” that is “the type of problem you are solving…”? We’re doing classification, but it’s not obvious to me where that’s declared in the class name.

Then we're using the from_name_func method of the ImageDataLoaders class, which creates our DataLoaders (dls, as we're calling them here), setting aside 20% of our data as the validation set, setting the optional seed value to 42, setting the labelling function to be our is_cat function defined above, and selecting Resize(224) as the transform applied to each item, resizing all the images to 224x224 pixels for historical reasons.
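To convince myself the 80/20 split actually happened, this quick check seems to work (assuming DataLoaders exposes the underlying datasets as train_ds and valid_ds, which is what I gather from the docs):

```python
n_train, n_valid = len(dls.train_ds), len(dls.valid_ds)
print(n_train, n_valid)
print(n_valid / (n_train + n_valid))   # should be close to 0.2
```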

But why is the seed set to 42? The book and the docs say it’s for reproducibility, and I understand that getting the same validation set every time is what gives us reproducible results, but what is a seed, and how does it achieve a reproducible validation set? I Googled “reproducibility seed” and found this post helpful:

“The “seed” is a starting point for the sequence and the guarantee is that if you start from the same seed you will get the same sequence of numbers.”

But if the elements of the validation set are chosen randomly, how does starting from the same point help? And why 42? Is there a practical consideration at work, or is it just Douglas Adams?

```python
learn = cnn_learner(dls, resnet34, metrics=error_rate)
```
Use the cnn_learner (convolutional neural network learner) function, telling it to use the dls we established above, the ResNet34 architecture, and the error rate as a metric. Pretty straightforward for me.
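As far as I can tell, error_rate is just 1 minus accuracy. A minimal sketch of the idea (not fastai's actual implementation, which also takes the argmax over the model's raw outputs first):

```python
import torch

def error_rate_sketch(preds, targs):
    "Fraction of predicted class indices that do not match the targets."
    return (preds != targs).float().mean()

preds = torch.tensor([1, 0, 1, 1])
targs = torch.tensor([1, 0, 0, 1])
print(error_rate_sketch(preds, targs))   # tensor(0.2500)
```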

```python
learn.fine_tune(1)
```
Since we're using a pretrained model, we don't want to start fitting the model from scratch, as we would if we used learn.fit. Instead, we'll fine-tune the model on our particular dataset for one epoch (a complete pass through the dataset), training the head of our model, which is unique to this dataset. The book says:

“After calling fit , the results after each epoch are printed, showing the epoch number, the training and validation set losses (the “measure of performance” used for training the model), and any metrics you’ve requested (error rate, in this case).”

But it must mean “After calling fine_tune.”
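From what I gather in the docs, fine_tune(1) is roughly equivalent to the following (a simplified sketch; the real method also adjusts the learning rates between the two stages):

```python
learn.freeze()            # only the new, randomly initialized head is trainable
learn.fit_one_cycle(1)    # one epoch to train the head
learn.unfreeze()          # now the whole network is trainable
learn.fit_one_cycle(1)    # the one epoch we asked for with fine_tune(1)
```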

I did have another hiccup trying to use ?? to see the source for functions, e.g. ??cnn_learner; I keep getting the error "Object cnn_learner not found." Other shortcuts, such as b to create a new cell, are working for me, so I'm not sure what I'm doing wrong with this one.

And that’s the lot! Thanks for reading all of this, and please let me know if you can answer any of my questions, or if I’ve mischaracterized anything.

1 Like

This is a good question. Setting the random seed to the same value guarantees that every time you run your model it will generate and consume exactly the same stream of random numbers, and therefore will get the same results. This is useful because when you are modifying or debugging the code, you can always compare your results against a baseline (the results with this random number seed) to check that you haven’t inadvertently changed anything.
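A tiny demonstration of this with Python's built-in generator (the exact numbers printed may differ across Python versions, but the two runs will always match each other):

```python
import random

random.seed(42)
print(random.sample(range(100), 5))   # five "random" picks

random.seed(42)
print(random.sample(range(100), 5))   # identical picks: same seed, same sequence
```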

1 Like

Thanks very much! I think my problem was naiveté: I was too willing to believe in the true randomness of the numbers chosen, which isn’t possible.

I’m still not completely clear on why we’re seeding with 42 in particular, but I’m just going to assume it’s because it’s the answer to life, the universe, and everything unless told otherwise.

3 Likes

Of course that’s why it’s 42! Trust your intuition on that one.

1 Like

@radek are there going to be sections for lectures 3 and 4? Seeing you list out the bullet points really helped me focus :slight_smile:

I intended this to be just something for the first lecture, to get people started. I am preparing something that will help with reviewing some of the material for each lecture, but realistically it is at least a couple of weeks from completion.

But I can share an early version if there is interest.

5 Likes

Yes, please! :smiley:

2 Likes

Thank you for the suggestions!
I will add one that works for me. This is not the first time I'm doing this course, so I try to accumulate knowledge from several lectures and then practice training models from scratch (I mean from a blank notebook, though of course I still use an ImageNet pre-trained model; transfer learning is the greatest tool!).
So now I'm watching lesson 6 and working on the Kaggle competition https://www.kaggle.com/c/plant-pathology-2020-fgvc7 - it is a pretty small dataset with several things that I have to change. It is classification, but the augmentations described in lessons 1 and 3 may be enhanced with a bigger crop and bigger rotation. It is not a straightforward multiclass or multilabel task, so I want to train one network to classify "true/false" and another to classify the diseases (one, another, or multiple). Another thing to work on is TTA; we have lots of computational time to get the best results, so this is an opportunity to do some extended homework and learn about model ensembles.
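TTA itself is cheap to try in fastai; a minimal sketch, assuming fastai v2's API and a trained learner named learn:

```python
# Averages predictions over several augmented copies of each validation image:
preds, targs = learn.tta()
print(error_rate(preds, targs))   # compare against the plain validation error
```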
For sure, it always involves a lot of peeping into the lesson notebooks, but after several notebooks from scratch it is a great feeling to know exactly what to do to solve minimal tasks.
Happy learning everyone

1 Like

Yes, interested

Hey Radek,
the stuff that's been put together above is fantastic.
Did you manage to put together a 'breakdown' per lecture? It'd be cool to see if so :slight_smile:

2 Likes

Not sure we ever had a separate topic for this, but an idea I had was to convert numerical or alphanumerical data into a quick response (QR) code and use that to train a model. I don't have the complete process in my head yet, like how to separate training and validation data, and perhaps the idea is not a feasible one. I would appreciate any comments; note I don't have a specific application in mind, just a general thought here.
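To make the idea concrete, generating the images is straightforward with the third-party qrcode package (a hypothetical sketch; the records and filenames here are purely illustrative):

```python
import qrcode   # pip install qrcode pillow

# Encode each record as a QR code image; files like these could form a dataset.
for i, record in enumerate(["12345-ABCDE", "67890-FGHIJ"]):
    img = qrcode.make(record)
    img.save(f"qr_{i}.png")
```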

Thanks Radek for this thread,

I was really confused regarding the homework for chapter 1. Turns out I have already done it.