In the dogs vs cats lesson, we created a sample folder.
But in the dog breed lesson, there was no sample folder.
I'm curious why the sample directory was not created in the dog breed competition.
Is this because we used smaller image sizes to get faster results, and then gradually increased the size?
Is this method of starting with a smaller image size a better/faster alternative to creating a sample folder?
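For concreteness, the pattern being asked about looks roughly like this (a sketch only - the helper, path, and architecture here are assumptions, not code from the lesson):

```python
from fastai.conv_learner import *  # fast.ai v0.7

PATH = 'data/dogscats/'  # assumes a train/valid folder layout
arch, bs = resnet34, 64

def get_data(sz):
    # Rebuild the data object at image size sz
    return ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz), bs=bs)

learn = ConvLearner.pretrained(arch, get_data(64), precompute=True)
learn.fit(1e-2, 3)             # fast iterations on small images first
learn.precompute = False       # precomputed activations are size-specific
learn.set_data(get_data(224))  # swap in larger images, keeping the weights
learn.fit(1e-2, 3)
```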
Hi @jeremy! How can I disable cross-validation? I'd like to train on the whole dataset, but when I set val_idxs to None (the default value) in the function ImageClassifierData.from_csv I get the error:
"Arrays used as indices must be of integer (or boolean) type"
Edit:
And if I use any value smaller than 0.2 in get_cv_idxs, I get the following error in the ConvLearner.pretrained function:
The sample folder was part of the original download - it wasn't created automatically. It was created for last year's course - we don't use it any more.
Ah, the problem here is that your dataset size has changed, so your precomputed activations are now the wrong size - delete your data/dogscats/tmp folder.
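If you'd rather do this from a notebook cell than the shell, a minimal sketch (the path is taken from the post above - adjust to your setup):

```python
import os, shutil

tmp = 'data/dogscats/tmp'  # stale precomputed activations live here
if os.path.exists(tmp):
    shutil.rmtree(tmp)     # they'll be recomputed at the correct size on the next run
```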
I'm not quite sure what your confusion is - can you tell me in more detail your understanding of what's happening, and what you aren't sure about?
We still use all the layers - there are no discontinuities - but the frozen layers stay as they were upon freezing: the gradient flows through them but they do not get updated. The data can flow freely, but we don't train them - what they do doesn't change as a result of training.
So training only makes sense in the context of having something not be frozen - we can freeze all the earlier layers and have just the last layer not be frozen. The data will flow through the neural net freely, but only the last layer will learn.
Don't worry about this too much just yet - we'll deal with all the theory and details in future lessons. For now, focus on using the notebooks to run your own experiments.
NNs in their simplest form are just functions inside a function. Each layer takes what the previous layer gives it, does some computation on it, and hands the result to the layer above.
Most layers that do interesting things not only take the data from the layer below, but also contain some trainable parameters specific to that layer. Still, we cannot just remove a layer if we don't want to train it - the layers further up the chain depend on it doing its work and performing its calculations. So by freezing, we keep the earlier layers in place and have them do their computations, but we don't train them - we do not alter their trainable parameters as a result of seeing data.
So to sum up: all layers perform their calculations, but only the non-frozen layers update their parameters based on the data our network sees.
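A tiny illustration of the mechanism in plain PyTorch (a sketch, not the actual fast.ai code):

```python
import torch
import torch.nn as nn

# Two "earlier" layers plus a last layer.
net = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))

# Freeze everything except the last layer: the frozen layers still do their
# computations in the forward pass, but their weights will never change.
for layer in list(net)[:-1]:
    for p in layer.parameters():
        p.requires_grad = False

# The optimizer only ever sees the unfrozen (last-layer) parameters.
opt = torch.optim.SGD([p for p in net.parameters() if p.requires_grad], lr=0.1)

x, y = torch.randn(4, 10), torch.randn(4, 2)
loss = nn.functional.mse_loss(net(x), y)  # data flows through all layers
loss.backward()
opt.step()  # only the last layer learns
```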
Select Forgot Password? on the Kaggle website and you'll receive an email with a few different options. One of the options lets you set up your own Kaggle username/password and connects it to your Google account. You can also go through this forum post on all things related to kaggle-cli: http://wiki.fast.ai/index.php/Kaggle_CLI
Hi,
I was wondering why this was used - could anyone please explain the intuition behind this step?
[Crestle has the datasets required for fast.ai in /datasets, so we'll create symlinks to the data we want for this competition. (NB: we can't write to /datasets, but we need a place to store temporary files, so we create our own writable directory to put the symlinks in, and we also take advantage of Crestle's /cache/ faster temporary storage space.)]
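What that cell accomplishes is roughly the following (a sketch - the exact dataset paths are assumptions based on the quoted description):

```python
import os

# /datasets on Crestle is read-only, so we make our own writable directory
# and point symlinks at the read-only data instead of copying it.
os.makedirs('data/dogscats', exist_ok=True)
src = '/datasets/fast.ai/dogscats/train'  # assumed Crestle dataset path
dst = 'data/dogscats/train'
if not os.path.islink(dst):
    os.symlink(src, dst)  # looks like local data, but nothing is duplicated
```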
While creating an AWS instance, is there any difference between:
1- creating the key pair in the AWS interface itself and downloading it to our local system
2- creating a key pair on the local system and importing it into AWS
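For what it's worth, the main practical difference is that with option 2 the private key never leaves your machine - only the public half is sent to AWS. A sketch of option 2 using boto3 (the key name and file path are made-up examples):

```python
import boto3

ec2 = boto3.client('ec2')

# Import a locally generated key pair (e.g. created with ssh-keygen).
# AWS stores only the public key; with option 1, AWS generates the pair
# and you download the private key once instead.
with open('/home/me/.ssh/aws_key.pub', 'rb') as f:
    ec2.import_key_pair(KeyName='fastai-key', PublicKeyMaterial=f.read())
```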