Researcher Michael Nielsen recently published some research notes on a style of thinking, writing, and teaching called "explorable explanations". In them he discusses and demonstrates a prototype called Magic Paper, using as his example the universality theorem that @jeremy mentioned in Lesson 1 of Part1v2. I remember @zaoyang asking about the intuition behind this theorem in the Lesson 1 Wiki thread, so here's the video:

# An intuitive explanation of the "Deep neural networks are universal function approximators" theorem
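The core intuition (which Nielsen also develops in his writing on neural networks) is that a pair of steep sigmoid units can form a "bump" that is roughly constant on a small interval and roughly zero elsewhere; summing many such bumps, one per sub-interval, lets a single hidden layer approximate any continuous function on an interval. Here's a minimal NumPy sketch of that idea (my own illustration, not code from the video) with hand-picked weights rather than trained ones:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump(x, left, right, height, sharpness=2000.0):
    # Two steep sigmoids subtracted give an approximate "tower":
    # ~height on [left, right], ~0 elsewhere. Each bump is just
    # two hidden sigmoid units with large input weights.
    return height * (sigmoid(sharpness * (x - left))
                     - sigmoid(sharpness * (x - right)))

def approximate(f, xs, n_bumps=50):
    # Sum of bumps, one per sub-interval: equivalent to a
    # one-hidden-layer net with 2 * n_bumps sigmoid units.
    edges = np.linspace(xs.min(), xs.max(), n_bumps + 1)
    out = np.zeros_like(xs)
    for left, right in zip(edges[:-1], edges[1:]):
        height = f((left + right) / 2.0)  # sample target at midpoint
        out += bump(xs, left, right, height)
    return out

xs = np.linspace(0.0, 1.0, 1000)
f = lambda x: np.sin(2 * np.pi * x)   # any continuous target works
target = f(xs)
approx = approximate(f, xs)
max_err = float(np.max(np.abs(approx - target)))
print(max_err)  # small; shrinks further as n_bumps grows
```

More bumps (and steeper sigmoids) drive the error down, which is exactly the theorem's claim: the network doesn't learn these weights here, but their existence is what "universal approximator" means.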

**Benudek** (benedikt herudek) #7

Nice video, surprised I couldn't find this theorem discussed more in comparison to computability theorems.