Lesson 4 In-Class Discussion

Removing the learned weights at random, in such a way that the calculation still works but overfitting is avoided.

Activations don’t have weights; dropout drops activations, not the learned weights.
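
To make that concrete, here is a quick PyTorch sketch (not from the lesson notebook, just nn.Dropout on its own): a dropout layer has no learnable parameters, and in training mode it zeroes a random subset of whatever activations pass through it.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
print(list(drop.parameters()))  # [] -- nothing is learned, so there are no weights to remove

drop.train()                    # training mode
x = torch.ones(1, 8)
print(drop(x))                  # roughly half the activations are zeroed at random;
                                # the survivors are scaled up by 1 / (1 - p) = 2
```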


Would adding dropout make training slower?

any idea what the “ps” stands for?

The abstract of the dropout paper seems perfectly serviceable.

Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different “thinned” networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, Journal of Machine Learning Research, 2014
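
On the abstract’s last point (the single “unthinned” network at test time): the framework handles this for you. PyTorch’s nn.Dropout, for example, uses inverted dropout: it scales the surviving activations by 1 / (1 - p) during training, so at test time the layer is simply a pass-through. A minimal check, again just illustrative:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(2, 4)

drop.train()      # training: units dropped at random, survivors scaled by 1 / (1 - p)
print(drop(x))

drop.eval()       # test time: dropout is disabled and x passes through unchanged,
print(drop(x))    # i.e. the single "unthinned" network from the abstract
```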


I guess the number of epochs remains the same, so no.

probabilities?

percentages?

What happens at test time?

I think it is p for one, but you have many of them, so that is why it’s ps.


Yep, multiple P = Ps. Thanks!
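
For what it’s worth, the idea behind a list of ps is one dropout probability per fully connected layer in the custom head. The sketch below is only an illustration of that idea in plain PyTorch (the layer sizes are made up), not the actual fastai code:

```python
import torch.nn as nn

def make_head(sizes, ps):
    """Build a fully connected head with one dropout probability per linear layer.

    sizes: e.g. [1024, 512, 10]; ps: e.g. [0.25, 0.5] -- one p per Linear layer.
    """
    layers = []
    for n_in, n_out, p in zip(sizes[:-1], sizes[1:], ps):
        layers += [nn.Dropout(p), nn.Linear(n_in, n_out), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers[:-1])  # drop the trailing ReLU after the last layer

head = make_head([1024, 512, 10], ps=[0.25, 0.5])
print(head)
```

So ps=[0.25, 0.5] would mean light dropout before the first linear layer and heavier dropout before the last one.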

Do you need dropout if you are doing batch normalization?

Yes, these are different techniques.

What are the recommended value(s) for the dropout probability?


Can a dropout layer be placed anywhere in the network, or can it only follow a BatchNorm layer?

It depends on your problem. These are hyperparameters.
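
To make the “it depends” concrete, here are two made-up heads in plain PyTorch: one puts Dropout right after a BatchNorm layer, the other puts it somewhere else entirely. Both are perfectly legal; which one generalises better is something you have to try on your problem.

```python
import torch.nn as nn

# Dropout directly after BatchNorm -- a common arrangement in fully connected heads
head_a = nn.Sequential(
    nn.BatchNorm1d(512), nn.Dropout(0.5), nn.Linear(512, 10),
)

# Dropout somewhere else entirely -- also valid; treat the placement as a hyperparameter
head_b = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.25), nn.Linear(256, 10),
)

print(head_a)
print(head_b)
```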

Kind of a basic question: is the last column in learn.fit accuracy on the training set or the validation set?


Yes, but it seems that in the current literature many people use either dropout or batch normalization, not both. I am wondering what the trade-off is between these two techniques, since batchnorm also seems to claim to help generalization.

The higher the value, the better the generalisation, but the lower the accuracy.
The lower the value, the worse the generalisation, but the higher the accuracy.

So in general, the more data you have, the lower your dropout can be?