An intuitive explanation of the "Deep neural networks are universal function approximators" theorem

Researcher Michael Nielsen recently wrote some research notes about a line of thinking, writing, and teaching he calls “explorable explanations”. In the notes he discusses and demonstrates a project called Magic Paper, using the universality theorem that @jeremy mentioned in Lesson 1 of Part1v2 as his example. I remember @zaoyang asking about the intuition behind this theorem in the Lesson 1 Wiki thread, so here’s the video:
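
If you want to poke at the intuition in code after watching, here is a minimal sketch of the “bump” construction Nielsen uses (the same idea as Chapter 4 of his Neural Networks and Deep Learning book): pairs of steep sigmoids form narrow bumps, and the output weights set each bump’s height to the value of the function being approximated. The target function and all parameter values below are just illustrative choices of mine, not anything taken from the video.

```python
import numpy as np

# Hypothetical target function to approximate on [0, 1].
def target(x):
    return np.sin(4 * np.pi * x) * x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-built one-hidden-layer network: each pair of steep sigmoids makes a
# narrow "bump", and the output weight gives the bump the target's height
# at that point.
n_bumps = 50
edges = np.linspace(0.0, 1.0, n_bumps + 1)
steepness = 1000.0  # large input weight -> sigmoid approximates a step function

def network(x):
    y = np.zeros_like(x)
    for left, right in zip(edges[:-1], edges[1:]):
        height = target((left + right) / 2.0)  # sample target at the bump centre
        # A step up at `left` minus a step up at `right` is a bump of height 1.
        bump = sigmoid(steepness * (x - left)) - sigmoid(steepness * (x - right))
        y += height * bump
    return y

xs = np.linspace(0.0, 1.0, 1000)
print("max abs error:", np.max(np.abs(network(xs) - target(xs))))
# More, narrower bumps (a wider hidden layer with steeper sigmoids) shrink the error.
```

That last comment is really the whole theorem in one line: by widening the single hidden layer you can get as close as you like to any continuous function on a compact interval.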

Thanks @poppingtonic, this is awesome :wink:

Thanks for sharing.

It’s really cool…
Thanks for the link…

Great place to start, and it led to some other very cool websites:

thank you

Nice video. Surprised I couldn’t find this theorem discussed more in comparison to computability theorems.