Lesson 3 - amazing insight - min 44:36 of video

This course is beyond amazing. What @jeremy and @rachel have done in putting together the materials and creating the wiki / forums is unbelievable. How so much awesomeness can fit in such a small region of space without causing a collapse of the universe is beyond me :smiley:

I am just watching the video for lesson 3 and was blown away by the insight @jeremy shares at 44 min 36 sec. He refers to the theorem that neural networks are universal function approximators, and how it follows from that theorem that any architecture can learn any objective function, any real-world mapping of, say, images to cat / dog labels; with certain architectures it just might take a much longer time than with others.
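To make the insight a bit more concrete for anyone reading along, here is a minimal sketch in plain NumPy (nothing from the lesson notebooks; the hidden-layer size, learning rate and target function are arbitrary choices of mine) of the theorem in action: a single hidden layer of tanh units trained with ordinary gradient descent can get very close to a smooth function like sin(x), given enough units and enough steps.

```python
# Minimal sketch of universal approximation: one hidden layer fitting sin(x).
# All sizes and hyperparameters below are illustrative assumptions, not from the lesson.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: x in [-pi, pi], target y = sin(x)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

# One hidden layer with 64 tanh units, linear output
n_hidden = 64
W1 = rng.normal(0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(10_000):
    # Forward pass
    h = np.tanh(x @ W1 + b1)           # hidden activations
    pred = h @ W2 + b2                 # network output
    err = pred - y                     # residual

    # Backward pass (mean squared error loss)
    grad_pred = 2 * err / len(x)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_pre = grad_h * (1 - h ** 2)   # derivative of tanh
    grad_W1 = x.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    # Plain gradient descent update
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final training MSE:", float(np.mean(err ** 2)))  # should be small if the fit worked
```

The same loop would fit a completely different target function with no change to the code; how long it takes, and how many units you need, is exactly the part that depends on the architecture.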

I have read a lot about neural nets and listened to quite a few lectures (though I am still a neural net newb), but this very simple way of reflecting on such a broad field as neural nets, grounding the reasoning in a consequence of a mathematical theorem, is a touch of brilliance. To distill a topic of such breadth into a sentence or two and to express it with such clarity takes genuine mastery. Just those two or three sentences have allowed me to look at neural networks in a completely new light and have taken my understanding of them to a new level.

Thank you @jeremy.

I can understand the sense of awe as it was something I was really obsessing over when I first came across the insight.

If you’re interested, there’s a really neat chapter that explains mathematically how it all works: http://neuralnetworksanddeeplearning.com/chap4.html
