I was wondering about that because it seemed like we’d go through papers?

Related to @mayeesha’s question - I’m interested in recommended learning resources (beyond the ones already mentioned in MOOC1) to help us refresh the prerequisite math for reading the papers.

Great question. I’m also curious to hear perspectives on this. My head hurts looking at some of these papers, but I found even a tiny bit of Linear Algebra and Calculus goes a long way. I spent time reviewing these topics during part one and summarized a few of the more relevant concepts here:

http://wiki.fast.ai/index.php/Calculus_for_Deep_Learning

http://wiki.fast.ai/index.php/Linear_Algebra_for_Deep_Learning

Work in progress! Feel free to contribute.

If you are not sure how to approach a research paper, you can also read this quick guide: How to read a paper.

The course assumes that you understand matrix products and the chain rule, plus all the concepts that were introduced in part 1 (e.g. softmax, sigmoid, ReLU, convolutions).
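If you want a quick self-check on those prerequisites, here's a minimal sketch (my own summary, not course material) of the functions mentioned above, plus a numerical check of the chain rule using a finite difference:

```python
import numpy as np

def sigmoid(z):
    """Squashes input to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Zeroes out negative values."""
    return np.maximum(0.0, z)

def softmax(z):
    """Exponentiate and normalize so the output sums to 1."""
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# A matrix product: a tiny "linear layer" applied to an input vector.
W = np.array([[1.0, -2.0],
              [0.5,  3.0]])
x = np.array([2.0, 1.0])
z = W @ x  # -> array([0., 4.])

# Chain rule: for f(x) = sigmoid(w * x),
# df/dx = sigmoid'(w * x) * w, where sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)).
w, x0 = 2.0, 0.5
s = sigmoid(w * x0)
analytic_grad = s * (1 - s) * w

# Verify against a finite-difference approximation.
eps = 1e-6
numeric_grad = (sigmoid(w * (x0 + eps)) - sigmoid(w * (x0 - eps))) / (2 * eps)

print(relu(z))                # [0. 4.]
print(softmax(z).sum())       # 1.0
print(analytic_grad, numeric_grad)  # both ~0.393
```

If you can follow why the analytic and numeric gradients match, you're in good shape for the level of math in the papers.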

What you saw in lesson 8 yesterday is the general level of math you can expect. The idea is to learn to read papers by understanding the concepts, rather than the math, and over time to learn to recognize some of the key equations and probability concepts when you see them.

Most importantly, the course assumes that you’re OK with looking at material that initially looks somewhat forbidding, and having the tenacity to stick with it! In particular, asking your fellow participants here when you get stuck is a great way to keep moving forwards. The general rule from part 1 was: if stuck, spend 30 mins trying to figure it out yourself (including googling and searching the forum), and then ask for help.

That seems like the real thing. Thanks to everyone for all the recommendations! I feel like I still need some time to get used to the whole real-time class and communicating in forums, but hopefully it'll be manageable. I'll start with grinding on the wiki + the part 1 forums + this week's lesson then.

You can do it! And we’re all here to support you

Thanks.

This is so awesome. Thanks, Brendan!

Just came across Andrej Karpathy’s rather mathematically minimal “Hacker’s Guide To Neural Networks”. Apologies if this has been posted before, but it favors code and physical intuition over hardcore math, so it seemed appropriate to share in this thread.

Haven’t seen this before - saved to my ever-growing to-read list!