Lesson 2 discussion - beginner

(Arjun Rajkumar) #44

In the dogs vs cats lesson, we created a sample folder.
But in the dog breed lesson, there was no sample folder.

Curious as to why the sample directory was not created in the dog breed competition?
Is this because we used smaller image sizes to get faster results, and then gradually increased the size?
Is this method of starting with a smaller image size a better/faster alternative to creating a sample folder?


Hi @jeremy! How can I disable the cross-validation? I’d like to use the whole dataset to train, but when I set ‘val_idxs’ to None (the default value) in the function ImageClassifierData.from_csv I get the error:

“Arrays used as indices must be of integer (or boolean) type”

And if I use any value smaller than 0.2 in the get_cv_idxs I’m getting the following error in the ConvLearner.pretrained function:

(Rikiya Yamashita) #47

@thiago I have exactly the same question too, thanks for asking :slight_smile:

(Jeremy Howard) #49

The sample folder was part of the original download - it wasn’t created automatically. It was created for last year’s course - we don’t use it any more.

(Jeremy Howard) #50

Ah the problem here is that your dataset size has changed, so your precomputed activations are now the wrong size - so delete your data/dogscats/tmp folder.
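In plain Python, the fix amounts to removing the cache directory so the activations get recomputed on the next run. A minimal sketch (the `data/dogscats` path is the one from the lesson notebook; adjust to your own setup):

```python
import shutil
from pathlib import Path

def clear_activation_cache(data_dir):
    """Remove the tmp/ cache of precomputed activations under data_dir.

    If the dataset (or validation split, or image size) changes, the cached
    activations have the wrong shape, so the cache must be rebuilt.
    """
    tmp_dir = Path(data_dir) / "tmp"
    if tmp_dir.exists():
        shutil.rmtree(tmp_dir)

# In the dogscats notebook, the call would be:
# clear_activation_cache("data/dogscats")
```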

(Jeremy Howard) #51

I’m not quite sure what your confusion is - can you tell me in more detail your understanding of what’s happening, and what you aren’t sure about?

(Vikrant Behal) #52

When the model is being trained (and frozen), do we go through all layers or just the last few layers?


We still use all the layers - there are no discontinuities - but the frozen layers stay as they were upon freezing: the gradient flows through them, but they do not get updated. The data flows freely, but we don't train them, so what they do doesn't change as a result of training.

So training only makes sense in the context of having something not be frozen - we can freeze all the earlier layers and have just the last layer unfrozen - the data will flow through the neural net freely, but only the last layer will learn.

Does this answer your question?

(Jeremy Howard) #54

Don’t worry about this too much just yet - we’ll deal with all the theory and details in future lessons. For now, focus on using the notebooks to run your own experiments.

(Vikrant Behal) #55

Thanks. What's the contribution of going through those layers? Updated weights/values which are generated but used only for the last few layers?


NNs in their simplest form are just functions inside a function. Each layer takes what the previous layer gives it, does some computation on it, and hands the result to the layer above.

Most layers that do interesting things not only take the data from the layer below, but also contain some trainable parameters specific to that layer. Still, we cannot just remove a layer if we don't want to train it - the layers further up the chain depend on it doing its work and performing its calculations. So by freezing, we keep the earlier layers in place and have them do their computations, but we don't train them - we do not alter their trainable parameters as a result of seeing data.

So to sum up, all layers perform their calculations, but it is only non-frozen layers that update their parameters based on the data our network sees.
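As a toy illustration of that summary (plain Python, not the fastai API - the one-parameter "layers" and all the numbers are made up): both layers run their computation on every step, and the gradient is well-defined for both, but only the unfrozen last layer updates its parameter.

```python
# Toy two-layer "network": y = w2 * (w1 * x), trained on one example.
# w1 is frozen: its layer still computes, but it never updates.
w1, w2 = 0.5, 1.0        # per-layer parameters
x, target = 2.0, 3.0     # a single training example
lr = 0.1                 # learning rate

for _ in range(50):
    h = w1 * x                        # frozen layer still does its work
    y = w2 * h                        # unfrozen last layer
    grad_w2 = 2 * (y - target) * h    # gradient for the last layer
    # grad_w1 = 2 * (y - target) * w2 * x   # exists, but w1 is frozen
    w2 -= lr * grad_w2                # only the unfrozen layer learns

# w1 stays at 0.5; the prediction w2 * w1 * x approaches the target 3.0
```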

(Vijay Narayanan Parakimeethal) #57

Hi Sree

Select Forgot Password? on the Kaggle website and you'll receive an email with a few different options. One of the options lets you set up your own Kaggle username/password and connects it to your Google account. You can also go through this page on all things related to kaggle-cli: http://wiki.fast.ai/index.php/Kaggle_CLI


Thanks @jeremy! Deleting the tmp folder allowed me to use smaller validation sets.

But, when I set val_idxs to None (the default value), I’m still getting the error: “Arrays used as indices must be of integer (or boolean) type”

Am I missing something?

(naveen manwani) #59

I was wondering why this was used - could anyone please explain the intuition behind this step?
[Crestle has the datasets required for fast.ai in /datasets, so we’ll create symlinks to the data we want for this competition. (NB: we can’t write to /datasets, but we need a place to store temporary files, so we create our own writable directory to put the symlinks in, and we also take advantage of Crestle’s /cache/ faster temporary storage space.)]
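In plain Python, that quoted cell boils down to: `/datasets` is read-only, so make your own writable directory and put a symlink in it that points back at the read-only data. A sketch of the idea (the function name and paths are illustrative, not from the notebook):

```python
import os

def link_readonly_dataset(src, workdir="data"):
    """Create a writable workdir containing a symlink to a read-only dataset.

    On Crestle, src would be somewhere under /datasets, which we cannot
    write to - so we read the data through the link and write our own
    files (tmp/, models/, etc.) next to it inside workdir.
    """
    os.makedirs(workdir, exist_ok=True)              # our writable directory
    dest = os.path.join(workdir, os.path.basename(src))
    if not os.path.islink(dest):
        os.symlink(src, dest)                        # point at read-only data
    return dest
```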

(Jeremy Howard) #60

No I’ve not tested that; it’s a bug! For now just create a list with a single index, e.g. [0]. I’ll try to fix the bug soonish.
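To make the workaround concrete: `get_cv_idxs` just samples a random fraction of the row indices for validation, and the bug fix above amounts to passing a one-element index list instead of None. A simplified stand-in for fastai's helper (not the actual fastai source):

```python
import random

def get_cv_idxs(n, val_pct=0.2, seed=42):
    """Pick a random val_pct fraction of range(n) as validation indices
    (a simplified stand-in for fastai's helper of the same name)."""
    random.seed(seed)
    return random.sample(range(n), max(1, int(n * val_pct)))

n = 10000
val_idxs = get_cv_idxs(n)   # normal case: ~20% of rows held out

# Workaround for "train on (almost) everything": a single-element
# index list instead of None, which currently triggers the bug.
tiny_val_idxs = [0]
```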


How do I get class labels as output instead of class probabilities?
For example, I need to generate a submission file like:

filename, class
001, dog
002, cat
003, frog

(Jeremy Howard) #63

data.classes has the class names.
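Putting that together: take the argmax of each row of predicted probabilities and index into the class names. A hedged sketch - `classes` stands in for the notebook's `data.classes`, and the filenames and probabilities here are made up:

```python
import csv

classes = ["cat", "dog", "frog"]        # i.e. data.classes
probs = [                               # per-image class probabilities
    [0.1, 0.8, 0.1],                    # image 001 -> dog
    [0.7, 0.2, 0.1],                    # image 002 -> cat
    [0.2, 0.1, 0.7],                    # image 003 -> frog
]
filenames = ["001", "002", "003"]

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "class"])
    for name, p in zip(filenames, probs):
        label = classes[p.index(max(p))]    # argmax -> class name
        writer.writerow([name, label])
```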


thank you @jeremy


While creating an AWS instance, is there any difference between:
1- creating the key pair in the AWS interface itself and importing it to our local machine
2- creating a key pair on the local system and exporting it to AWS

If we choose the 1st option, will AWS charge for it?

(Jeremy Howard) #66

Either approach is fine. I like (2) since you can reuse your key elsewhere.