An intuitive explanation of the "Deep neural networks are universal function approximators" theorem


(Brian Muhia) #1

Researcher Michael Nielsen recently published some research notes on a line of thinking, writing and teaching called “explorable explanations”. In these notes he discusses and demonstrates a prototype called Magic Paper, and chose the universality theorem that @jeremy mentioned in Lesson 1 of Part1v2 as his example. I remember @zaoyang asking about the intuition behind this theorem in the Lesson 1 Wiki thread, so here’s the video:
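The core intuition in the video (and in chapter 4 of Nielsen's book) is that pairs of steep sigmoid neurons form "bump" functions, and a weighted sum of enough narrow bumps can trace out any continuous function. Here's a minimal sketch of that construction in plain NumPy; the function names (`approximate`, `net`) and parameter choices (`n_bumps`, `sharpness`) are my own illustration, not anything from the video:

```python
import numpy as np

def sigmoid(z):
    # Clip the argument so np.exp never overflows for very steep sigmoids.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def approximate(f, a, b, n_bumps=200, sharpness=1000.0):
    """Build a one-hidden-layer net that approximates f on [a, b].

    Each "bump" is the difference of two steep sigmoids, one rising at
    the left edge of a small interval and one rising at the right edge.
    The bump's output weight is f evaluated at the interval's midpoint,
    so the network is a piecewise-constant staircase approximation of f.
    """
    edges = np.linspace(a, b, n_bumps + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    heights = f(mids)

    def net(x):
        x = np.asarray(x, dtype=float)[..., None]          # shape (..., 1)
        rising = sigmoid(sharpness * (x - edges[:-1]))     # shape (..., n_bumps)
        falling = sigmoid(sharpness * (x - edges[1:]))
        # Weighted sum of bumps = the network's single output neuron.
        return ((rising - falling) * heights).sum(axis=-1)

    return net

# Approximate sin(x) on [0, 2*pi] and measure the worst-case error.
net = approximate(np.sin, 0.0, 2 * np.pi)
xs = np.linspace(0.1, 2 * np.pi - 0.1, 1000)
err = np.max(np.abs(net(xs) - np.sin(xs)))
print(f"max abs error: {err:.4f}")
```

Increasing `n_bumps` shrinks the error without bound, which is exactly the theorem's claim: one hidden layer suffices, provided you allow it to be arbitrarily wide.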


(Rikiya Yamashita) #2

Thanks @poppingtonic, this is awesome :wink:


(Anand Saha) #3

Thanks for sharing.


(Aditya) #4

It’s really cool…
Thanks for the link…


(Chris Palmer) #5

A great place to start, and it led me to some other very cool websites:


(Zao Yang) #6

thank you


(benedikt herudek) #7

Nice video. Surprised I couldn't find this theorem discussed more in comparison to computability theorems.