Lesson 6 - Foundations of Convolutional Neural Networks
This topic is for official updates and information regarding lesson 6. Only admins are able to reply to this thread, so please subscribe to topic notifications to ensure you don’t miss anything. You should also follow the general course update thread.
Note that this is a forum wiki thread, so you all can edit this post to add/change/organize info to help make it better! To edit, click on the little edit icon at the bottom of this post.
Lesson Resources
- Detailed lesson notes - thanks to @hiromi
- Notebooks:
- Lesson 6 in-class discussion thread
- Lesson 6 advanced discussion
- Lesson 6 Review - slides from TWiML Study Group, 3/09/2019: TWiML_Fastai_course1v3_lesson6.pdf (531.3 KB). See slide #5 for an analysis of @Jeremy’s matrix multiplication notation.
Other Resources
- Convolutions: http://www.cs.cornell.edu/courses/cs1114/2013sp/sections/S06_convolution.pdf
- Convolution Arithmetic: https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md
- Normalization: https://arthurdouillard.com/post/normalization/
- Cross entropy loss: https://gombru.github.io/2018/05/23/cross_entropy_loss/
- How CNNs work: https://brohrer.github.io/how_convolutional_neural_networks_work.html
- Image processing and computer vision: https://openframeworks.cc/ofBook/chapters/image_processing_computer_vision.html
- “Yes you should understand backprop”: https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b
- BERT, a state-of-the-art language model for NLP: https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270
- Hubel and Wiesel: https://knowingneurons.com/2014/10/29/hubel-and-wiesel-the-neural-basis-of-visual-perception/
- Perception: https://grey.colorado.edu/CompCogNeuro/index.php/CCNBook/Perception
- Implementing Grad-CAM in PyTorch: https://medium.com/@stepanulyanin/implementing-grad-cam-in-pytorch-ea0937c31e82
- Intuitive Explanation of Conv Nets:
  - Discusses the four main operations in ConvNets: 1) convolution, 2) ReLU for non-linearity, 3) pooling, 4) a fully connected layer for prediction (see the minimal sketch after this list)
  - Walks through the overall process of training with backpropagation and visualizing ConvNets
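As a quick illustration of those four operations, here is a minimal PyTorch sketch (the layer sizes, input shape, and class count are arbitrary choices for illustration, not taken from the linked article):

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Minimal ConvNet showing the four operations: conv, ReLU, pooling, fully connected."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # 1) convolution
        self.relu = nn.ReLU()                                    # 2) ReLU non-linearity
        self.pool = nn.MaxPool2d(2)                              # 3) pooling (halves H and W)
        self.fc   = nn.Linear(16 * 14 * 14, n_classes)           # 4) fully connected layer

    def forward(self, x):                         # x: (batch, 3, 28, 28)
        x = self.pool(self.relu(self.conv(x)))    # -> (batch, 16, 14, 14)
        x = x.view(x.size(0), -1)                 # flatten for the linear layer
        return self.fc(x)                         # class scores, trained with backprop

model = TinyConvNet()
print(model(torch.randn(2, 3, 28, 28)).shape)  # torch.Size([2, 10])
```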