Did YOU do the homework? šŸ˜„

Thanks, I was worried I was behind when people posted questions beyond 01_intro. Do we have deliverables when we finish chapter 1, or are we just required to learn the material and be able to run the code?

The homework is not checked or anything like that. Everything we do in this course is just for us - meaning you should do whatever you feel helps you learn :slight_smile:

Part of what seems to work very well (as Jeremy suggests in the lecture) is running code, seeing how things change with changes to inputs, checking out the docs, playing around with the Jupyter notebook -> getting acquainted with the whole ecosystem.

This thread is about giving people a bit of a helping hand with what they can do for the first lecture to get going, but generally you can come up with anything you feel would help you learn: running the code on your own data or some other dataset (maybe even one built into fast.ai, though this links to fastai v1), writing a post on the forum explaining something, asking a question, writing a blog post, creating a notebook on something that interests you and pushing it to GitHub to share on the forums, etc.

I am not sure if it's part of the top-down way of learning (didn't read the book) but in the way the fast.ai courses play out, you control your destiny, or what you get from the course! Sounds very similar to life in that regard :slight_smile:

11 Likes

This is gold. :slight_smile:

6 Likes

Hi Radek,
Where can I find information on how to do this:

  • see if you can grab the fast.ai documentation notebooks and try running them
2 Likes

Hi @reshama,
I believe this is the link you're looking for.

2 Likes

Hi @radek and fellow members, can someone please update the "Did YOU do the homework? :smile:" thread with this week's homework and topics we can study on our own that are suitable for the course.

1 Like

@0tist Please don't hesitate to do it yourself! We're all here to learn, even though our speeds vary since we started our "walks with fastai" at different points in time, but the great thing is we're here. :slight_smile:

Most of the time, someone on the forums starts something and many people follow. Radek might say this is similar to how it happens in life :slight_smile:

3 Likes

Hi @0tist, I just added things from lecture 2 that can be interpreted as homework from my perspective. Thanks @radek for starting this thread, it has really helped me.

Regarding the instruction to read and understand the #Click me cell of Chapter 1, these are my thoughts and questions. As will be obvious, I'm a novice programmer.

from fastai2.vision.all import *
Import everything (classes, functions, etc.) from the fastai vision library.

path = untar_data(URLs.PETS)/'images'
I had a misunderstanding about this one. I thought untar_data(URLs.PETS) was downloading the URLs of the pet images, possibly because I'm predisposed to think of downloading URLs for the classifier from Lesson 1 of v3, but also because it's URLs plural, not URL. So I checked the docs, and it turns out there's a URLs class we're using, and PETS is one of its attributes: a constant holding the download URL for the dataset. There are similar URLs attributes for other datasets, but only the fast.ai-hosted ones. This approach doesn't generalize to non-fastai datasets (but we'll be learning other approaches that do generalize!).
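For instance, you can see for yourself that it's just a string constant (the exact URL may differ between fastai versions):

print(URLs.PETS)  # e.g. 'https://s3.amazonaws.com/fast-ai-imageclas/oxford-iiit-pet.tgz'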

So the dataset is extracted, and the location of the extracted dataset is returned to path. But what does the /'images' at the end do? I searched the forum and found the notes from Lesson 3 of v3, and if I'm extrapolating correctly, I think the pets dataset has a folder named 'images', and we're telling the path to point specifically to that folder, rather than to the dataset folder as a whole. Is that right?
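From what I can tell, the / works because untar_data returns a pathlib-style Path object, which overloads the / operator to join path components. A minimal sketch (the extraction location below is made up for illustration):

from pathlib import Path

base = Path('/root/.fastai/data/oxford-iiit-pet')  # hypothetical location returned by untar_data
print(base/'images')  # /root/.fastai/data/oxford-iiit-pet/images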

def is_cat(x): return x[0].isupper()
Define a function is_cat to which we pass x, the filename of each pet image. A characteristic of this particular dataset is that the first character of the filename is uppercase if the file is an image of a cat, so is_cat returns True if the first character of x is uppercase, and False otherwise.
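For example (these sample filenames follow the dataset's convention of capitalising cat breeds):

print(is_cat('Birman_12.jpg'))  # True  - cat breed, first character uppercase
print(is_cat('beagle_3.jpg'))   # False - dog breed, first character lowercase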

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

I have questions about this one, too. The book says:

" The fourth line tells fastai what kind of dataset we have, and how it is structured. There are various different classes for different kinds of deep learning dataset and problemā€“here weā€™re using ImageDataLoaders . The first part of the class name will generally be the type of data you have, such as image, or text. The second part will generally be the type of problem you are solving, such as classification, or regression."

What is "the second part of the class name" that is "the type of problem you are solving..."? We're doing classification, but it's not obvious to me where that's declared in the class name.

Then we're using the from_name_func method of the ImageDataLoaders class, which creates our DataLoaders (dls, as we're calling them here), setting aside 20% of our data as the validation set, setting the optional seed value to 42, setting the labelling function to be our is_cat function defined above, and selecting Resize(224) as the transform to be applied to each item, resizing all the images to 224x224 pixels for historical reasons.

But why is the seed set to 42? The book and the docs say it's for reproducibility, and I understand that getting the same validation set every time is what gives us reproducible results, but what is a seed, and how does it achieve a reproducible validation set? I Googled "reproducibility seed" and found this post helpful:

"The 'seed' is a starting point for the sequence and the guarantee is that if you start from the same seed you will get the same sequence of numbers."

But if the elements of the validation set are chosen randomly, how does starting from the same point help? And why 42? Is there a practical consideration at work, or is it just Douglas Adams?

learn = cnn_learner(dls, resnet34, metrics=error_rate)
Use cnn_learner (a convolutional neural network learner), telling it to use the dls we established above, the ResNet34 architecture, and the error rate as a metric. Pretty straightforward for me.

learn.fine_tune(1)
Since we're using a pretrained model, we don't want to start fitting the model from scratch, as we would if we used learn.fit. Instead, we'll fine-tune the model on our particular dataset for one epoch (a complete pass through the dataset), training the head of our model, which is unique to this dataset. The book says:

"After calling fit, the results after each epoch are printed, showing the epoch number, the training and validation set losses (the 'measure of performance' used for training the model), and any metrics you've requested (error rate, in this case)."

But it must mean "After calling fine_tune."
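As far as I can tell, learn.fine_tune(1) is roughly equivalent to the following (a simplified sketch of the idea, not the exact fastai source, which also adjusts learning rates between the two stages):

learn.freeze()          # train only the newly added head, keeping the pretrained body fixed
learn.fit_one_cycle(1)
learn.unfreeze()        # then train the whole network
learn.fit_one_cycle(1)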

I did have another hiccup trying to use ?? to see the docs for methods, e.g. ??cnn_learner; I keep getting the error "Object cnn_learner not found." Other shortcuts, such as b to create a new cell, are working for me, so I'm not sure what I'm doing wrong with this one.
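(If anyone else hits this: my guess is that ?? can only look up names already defined in the running kernel, so the import cell has to be run first. Something like this should work:)

from fastai2.vision.all import *  # makes cnn_learner available in this session
??cnn_learner                     # shows the source and docstring
doc(cnn_learner)                  # fastai's doc() helper is another option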

And that's the lot! Thanks for reading all of this, and please let me know if you can answer any of my questions, or if I've mischaracterized anything.

1 Like

This is a good question. Setting the random seed to the same value guarantees that every time you run your model it will generate and consume exactly the same stream of random numbers, and therefore will get the same results. This is useful because when you are modifying or debugging the code, you can always compare your results against a baseline (the results with this random number seed) to check that you haven't inadvertently changed anything.
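A minimal illustration of this, using Python's standard library (the same principle applies to the seed argument we pass to from_name_func):

import random

random.seed(42)
a = [random.random() for _ in range(3)]

random.seed(42)  # reset to the same seed...
b = [random.random() for _ in range(3)]

print(a == b)  # True - ...and we get the same "random" sequence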

1 Like

Thanks very much! I think my problem was naiveté: I was too willing to believe in the true randomness of the numbers chosen, which isn't possible.

I'm still not completely clear on why we're seeding with 42 in particular, but I'm just going to assume it's because it's the answer to life, the universe, and everything unless told otherwise.

3 Likes

Of course that's why it's 42! Trust your intuition on that one.

1 Like

@radek are there going to be sections for lectures 3 and 4? Seeing you list out the bullet points really helped me focus :slight_smile:

I intended this to be just something for the first lecture, to get people started. I am preparing something that will help with reviewing some of the material for each lecture, but realistically it is at least a couple of weeks from completion.

But I can share an early version if there is interest.

4 Likes

Yes, please! :smiley:

2 Likes

Thank you for the suggestions!
I will add one that works for me. I am not doing this course for the first time, so I try to accumulate knowledge from several lectures and then practice training models from scratch (I mean from a blank notebook, though of course using an ImageNet pre-trained model; transfer learning is the greatest tool!).
So now I'm watching lesson 6 and working on the Kaggle competition https://www.kaggle.com/c/plant-pathology-2020-fgvc7 - it is a pretty small dataset with several things that I have to change. It is classification, but the augmentations described in lessons 1 and 3 may be enhanced with a bigger crop and bigger rotation. It is not a straightforward multiclass or multilabel task, so I want to train one network to classify true/false and another to classify the diseases (one, another, or multiple). Another thing to work on is TTA; we have lots of computational time to get the best results, so this is an opportunity to do some extended homework and learn about model ensembles.
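For the TTA part, a minimal sketch with fastai v2 (assuming a trained learn as in the lesson notebooks):

preds, targs = learn.tta()     # average predictions over several augmented versions of each item
print(accuracy(preds, targs))  # compare against the plain validation accuracy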
For sure, it always involves a lot of peeping into the lesson notebooks, but after several notebooks from scratch it is a great feeling to know exactly what to do to solve minimal tasks.
Happy learning, everyone!

1 Like

Yes, interested

Hey Radek,
the stuff that's been put together above is fantastic.
Did you manage to put together a 'breakdown' per lecture? It'd be cool to see if so :slight_smile:

2 Likes

Not sure we ever had a separate topic for this, but an idea I had was to convert numerical or alphanumerical data into quick response (QR) codes and use those to train a model. I don't have the complete process in my head yet (like how to separate the training and validation data), and perhaps the idea is not a feasible one, so I would appreciate any comments. Note I don't have a specific application in mind, just a general thought here.
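Just to make the generation side concrete, something like this could produce the images (using the third-party qrcode package, installed with pip install qrcode[pil]; the sample datum is made up):

import qrcode

img = qrcode.make('12345-ABC')  # encode an arbitrary string as a QR-code image
img.save('sample_qr.png')       # an image model could then be trained on files like this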

Thanks Radek for this thread,

I was really confused regarding the homework for chapter 1. Turns out I have already done it.