On the overwhelming amount of jargon in Deep Learning

J: There is an enormous amount of jargon in deep learning, including terms like rectified linear unit. The vast vast majority of this jargon is no more complicated than can be implemented in a short line of code, as we saw in this example. The reality is that for academics to get their papers published they need to make them sound as impressive and sophisticated as possible. One of the ways that they do that is to introduce jargon. Unfortunately, this has the result that the field ends up becoming far more intimidating and difficult to get into than it should be. You do have to learn the jargon, because otherwise papers and tutorials are not going to mean much to you. But that doesn’t mean you have to find the jargon intimidating. Just remember, when you come across a word or phrase that you haven’t seen before, it will almost certainly turn out to be referring to a very simple concept.

Jeremy, thanks so much for writing this section. (extracted from 04_mnist_basics.ipynb)
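Case in point, for anyone curious: the rectified linear unit the excerpt mentions really is a one-liner. Here is a minimal sketch of my own (the function name and test values are mine, not taken from the notebook):

```python
import torch

def relu(x):
    # "Rectified linear unit" just means: replace negative values with zero
    return x.clamp(min=0)

print(relu(torch.tensor([-2.0, -0.5, 0.0, 1.5, 3.0])))
# tensor([0.0000, 0.0000, 0.0000, 1.5000, 3.0000])
```

That's the whole concept hiding behind the intimidating name.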

I think I almost shed a tear of relief and understanding after reading this. The jargon has been, by far, my biggest challenge in getting into deep learning. It always made me feel so dumb! Good to finally find an explanation, even if it makes me a little bit mad.

Anyway… I felt like I needed to vent my frustration 🙂
To everyone else here: don’t feel dumb! It’s the jargon that’s dumb!
