Lesson 8 - Official topic

No weights were set to zero. Dropout is applied to activations.
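
A minimal PyTorch sketch (my own illustration, not code from the lesson) of how nn.Dropout zeroes activations while the layer's weights stay untouched:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

lin = nn.Linear(4, 4)
drop = nn.Dropout(p=0.5)  # active here because a fresh module starts in training mode

x = torch.randn(1, 4)
acts = lin(x)      # activations from the linear layer
out = drop(acts)   # roughly half the activations zeroed, the rest scaled by 1/(1-p)

print(out)         # zeros appear only in the activations
print(lin.weight)  # the weights themselves are never modified by dropout
```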

A general version of dropout was also proposed much earlier (Hanson 1990) but is rarely cited.

You can find many more instances of these “we or someone else did it first” claims on Schmidhuber’s blog, and here’s a rebuttal from Hinton.

EDIT: rather than adding another post, I’m responding here :slight_smile: I’m inclined to agree that it’s a bit of a stretch (as are some of the other claims in that blog post). However, Hanson also co-authored another paper in 2018: Dropout is a special case of the stochastic delta rule: faster and more accurate deep learning.

Thanks for sharing. It seems more and more that Schmidhuber and his team, or other groups, did everything already, but that’s a discussion for another day! :wink:

Won’t backpropagation update the weights twice, since the same layer is referenced twice in the network?
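
No: autograd accumulates the gradients from both uses into a single .grad, and the optimizer applies one update per step. A quick sketch (mine, just to illustrate the behavior):

```python
import torch
import torch.nn as nn

# The same linear layer is used twice, as when weights are tied or shared.
lin = nn.Linear(3, 3)
x = torch.randn(1, 3)

out = lin(lin(x))     # the shared layer appears twice in the computation graph
out.sum().backward()

# Autograd sums the gradient contributions from both uses into one .grad
# tensor, and the optimizer then performs a single update per step.
print(lin.weight.grad)
```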

Why doesn’t NLP use CNNs? Is it because we can’t “label” a body of text?

Here is the questionnaire for chapter 12. It is still a work in progress. Feel free to contribute!

End of part 1 :frowning:

Thank you for pulling off an amazing class under historic circumstances! Thank you, Rachel, Sylvain, and Jeremy! Thank you, Thank you, Thank you!! And thank you for making us wear masks!

Hanson’s paper seems to be about adding stochastic noise to improve convergence.
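
As I read it, the stochastic delta rule treats each weight as a Gaussian random variable and samples it on every forward pass. A rough sketch of that idea (my interpretation, with a fixed noise scale for simplicity rather than the adaptive variances of the original rule):

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Sketch: sample weights from a per-weight Gaussian on each forward pass."""
    def __init__(self, n_in, n_out, sigma=0.1):
        super().__init__()
        self.mean = nn.Parameter(torch.randn(n_out, n_in) * 0.01)  # learned weight means
        self.bias = nn.Parameter(torch.zeros(n_out))
        self.sigma = sigma  # fixed noise scale here; the original rule adapts it

    def forward(self, x):
        w = self.mean + self.sigma * torch.randn_like(self.mean)  # fresh noise each pass
        return x @ w.t() + self.bias

layer = NoisyLinear(4, 2)
print(layer(torch.randn(1, 4)))
```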

You can change the dimensionality of the CNN to make a 1D CNN (see https://pytorch.org/docs/stable/nn.html). 1D CNNs capture the order of the text as well; I have seen them used as a way of preprocessing.
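
For reference, a minimal sketch of a 1D convolution over embedded tokens (all sizes made up for illustration):

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, n_filters, kernel_size = 1000, 50, 64, 3  # hypothetical sizes

emb = nn.Embedding(vocab_size, emb_dim)
conv = nn.Conv1d(emb_dim, n_filters, kernel_size)  # slides along the sequence axis

tokens = torch.randint(0, vocab_size, (8, 20))  # batch of 8 sequences, 20 tokens each
x = emb(tokens).transpose(1, 2)                 # Conv1d expects (batch, channels, seq_len)
feats = conv(x)                                 # -> (8, 64, 18): ordered n-gram-like features
print(feats.shape)
```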

Are there pointers to easier research papers for DL beginners to implement?

I’ve also created a Wiki: 2020 ML Interviews Resources & Advice(s). Please contribute!

I definitely suggest Kaggle competitions; their data can be split in ways other than a random split.

It’s good to get into project groups as well.

Will a certificate be released, or something to remember this great journey by, signed by Jeremy?

Applying CNNs to NLP is apparently a thing, or at least it was before transformers!

End of the class :frowning:

Thanks for a great course amid this pandemic! :mask:

I learned a lot even though I went through last year’s course.

I hope Part 2 is coming up soon! :slight_smile:
:tada::tada::tada:

Try https://paperswithcode.com/

Mikaela from the USF Data Institute will distribute certificates to those registered for the course (although unfortunately they are not signed).

Thank you @jeremy and @rachel as always, awesome new material in the fastbook! Can’t wait to get the official O’Reilly version. I’m still working on COVID-19 and will talk to you all again soon!

Especially the big “Browse State of the Art” button.
