Fastbook Chapter 7 questionnaire solutions (wiki)

Here are the questions for Chapter 7:

Questionnaire

What is the difference between ImageNet and Imagenette? When is it better to experiment on one versus the other?

Imagenette is a smaller subset of ImageNet containing just 10 easily distinguished classes. It is better to experiment on Imagenette when prototyping, since we can train baseline models and iterate quickly without a lot of compute resources; the full ImageNet is better once we want to confirm an approach at scale.

What is normalization?

Normalization means transforming the input data so that it has a mean of 0 and a standard deviation of 1. For images this is done per channel: subtract the channel mean and divide by the channel standard deviation.

Why didn’t we have to care about normalization when using a pre-trained model?

Because when we use a pretrained model, fastai automatically adds the proper Normalize transform with the statistics the model was originally trained with (for ImageNet models, imagenet_stats), so we do not have to set it up ourselves.
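When we build the data pipeline ourselves (for example to train from scratch), we add the normalization explicitly. Here is a minimal sketch using the standard fastai DataBlock API; the folder layout and image size are assumptions, not from the book:

```python
from fastai.vision.all import *

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,
    item_tfms=Resize(224),
    # normalize every batch with the ImageNet channel means and stds
    batch_tfms=Normalize.from_stats(*imagenet_stats))

# dls = dblock.dataloaders(path)  # `path` points at your image folder
```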

What is progressive resizing?
Progressive resizing means training with progressively larger images: we start training the model on small images and then continue training the same model on larger and larger sizes.
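A minimal sketch of progressive resizing with fastai; the get_dls helper, sizes, and epoch counts are just illustrative assumptions:

```python
from fastai.vision.all import *

def get_dls(size, bs, path):
    """Build DataLoaders that resize every image to `size` pixels."""
    dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
                       get_items=get_image_files,
                       get_y=parent_label,
                       item_tfms=Resize(size))
    return dblock.dataloaders(path, bs=bs)

# path = untar_data(URLs.IMAGENETTE)
# learn = cnn_learner(get_dls(128, 64, path), resnet34, metrics=accuracy)
# learn.fine_tune(4)                    # train on 128px images first
# learn.dls = get_dls(224, 32, path)    # swap in larger images
# learn.fine_tune(4)                    # keep training the same model at 224px
```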

What is test time augmentation? How do you use it in fastai?
TTA (test time augmentation) means applying data augmentation at inference time: we create several augmented versions of each image, get a prediction for each version, and combine (e.g. average) those predictions to produce the final prediction for that image. In fastai this is done by calling learn.tta() on a trained Learner.
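A minimal sketch of TTA in fastai, assuming learn is a Learner you have already trained:

```python
# Predictions averaged over several augmented versions of each validation image
preds, targs = learn.tta()
print(accuracy(preds, targs))
```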

Is using TTA at inference slower or faster than regular inference? Why?
TTA is slower, because for every image the model has to run on several augmented versions and the predictions then have to be combined, so inference costs roughly N times as much compute for N augmented copies of each image.

What is Mixup? How do you use it in fastai?
Mixup takes two images, picks a random weight, and builds a new training example that is the weighted average of the two images; the target is the same weighted average of the two (one-hot encoded) labels. In fastai you use it by adding the MixUp callback to your Learner, as in the sketch below.
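A minimal sketch of using the MixUp callback, assuming dls is an existing DataLoaders object:

```python
from fastai.vision.all import *
from fastai.callback.mixup import MixUp

# MixUp blends pairs of images and their one-hot labels during training
learn = cnn_learner(dls, resnet34, metrics=accuracy, cbs=MixUp())
# learn.fit_one_cycle(20)  # Mixup usually needs more epochs to pay off
```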

Why does Mixup prevent the model from being too confident?

Because with Mixup the targets are never exactly 0 or 1 (they are blends of two labels), the model is never trained to output perfectly confident predictions, which also helps it generalize better.

Why does training with Mixup for five epochs end up worse than training without Mixup?
Because Mixup makes the task harder: it is harder to tell what is in each blended image, and the model effectively has to predict parts of two labels per image rather than one. The extra regularization only pays off with more epochs of training, so after just five epochs the model trained without Mixup is still ahead.

What is the idea behind label smoothing?
The idea behind label smoothing is to use softer target values instead of hard 0s and 1s: the correct class gets a target slightly less than 1 and every other class gets a target slightly greater than 0 (for example, a one-hot target like [0, 1, 0] becomes something like [0.03, 0.93, 0.03]), so the model is never pushed toward infinitely confident predictions.
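In fastai this is done by swapping the loss function; a minimal sketch assuming dls already exists:

```python
from fastai.vision.all import *

# LabelSmoothingCrossEntropy uses eps=0.1 by default
learn = cnn_learner(dls, resnet34, metrics=accuracy,
                    loss_func=LabelSmoothingCrossEntropy())
# learn.fine_tune(10)
```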

What problems in your data can label smoothing help with?
It helps when some labels are noisy or simply wrong (mislabeled data), and with overfitting more generally: because the model is never rewarded for being 100% confident in a single label, it tends to generalize better.

When using label smoothing with five categories, what is the target associated with the index 1?
With \epsilon = 0.1 and five categories, and assuming index 1 is the correct class:

True class (index 1): 1 - \epsilon + \frac{\epsilon}{5} = 0.9 + 0.02 = 0.92

Other classes: \frac{\epsilon}{5} = 0.02

So the target vector is [0.02, 0.92, 0.02, 0.02, 0.02]
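A quick plain-Python check of those numbers:

```python
eps, n_classes, true_idx = 0.1, 5, 1
targets = [eps / n_classes] * n_classes          # every class gets eps/N
targets[true_idx] = 1 - eps + eps / n_classes    # true class gets 1 - eps + eps/N
print(targets)  # ~[0.02, 0.92, 0.02, 0.02, 0.02]
```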

What is the first step to take when you want to prototype quick experiments on a new dataset?

If our dataset is big, there is no point in prototyping on the whole thing: build a small subset that is representative of the full dataset, just as Imagenette is for ImageNet, and run quick experiments on that before scaling back up.
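A minimal sketch of one way to do that with fastai; subset_files is a hypothetical helper, not part of the library:

```python
import random
from functools import partial
from fastai.vision.all import *

def subset_files(path, pct=0.1, seed=42):
    """Return a random `pct` fraction of the image files under `path`."""
    files = list(get_image_files(path))
    random.seed(seed)
    return random.sample(files, int(len(files) * pct))

# dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
#                    get_items=partial(subset_files, pct=0.1),
#                    get_y=parent_label, item_tfms=Resize(224))
# dls = dblock.dataloaders(path)  # quick experiments on 10% of the data
```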
