Hello everybody!
Joseph Anuff from Vallejo, CA, wanting first off to thank everyone for being such an inspiring student body. Obviously, Jeremy and Rachel and their developers deserve all conceivable props for making DL accessible and comprehensible—I’d already rate fast.ai the most format-validating MOOC ever, and they’re clearly just getting started—but this is still really challenging stuff. The sheer breadth of material seems to find the beginner in all of us. But seeing so many intellectually accomplished people willing to be confused (and/or humanely instructive) in public gives a privately confused person like myself a lot of hope!
My background is in Web 1.0, Web 2.0, and TV production, usually in a producer role and mostly in the ‘90s and ‘00s. The ‘10s have been drastically less public and less prolific, as I’ve pursued a more hands-on technical mastery of various layers in the media stack. For the last five years or so that’s taken the form of full-stack JS experimentation, sometimes for clients but, as much as possible, self-directed.
By the time v3 dropped this year, I’d already binged on every fast.ai lecture (minus the last 5 Linear Algebra videos) and was about to start my second pass. (Since discovering fast.ai late last summer, I’d also consumed other common course loads, the biggest impacts being Andrew Ng, DataCamp, and Khan Academy. In years past, I’ve also taken edX courses in Data Science with R.)
Over the last six weeks I’ve listened to each lecture five times: twice at 1.5–2x speed for familiarity, once with @hiromi’s excellent notes, once running the Jupyter notebooks, and once taking notes by hand myself. I tried maybe 5 of the server platforms, mostly successfully, before semi-settling on GCP. I assembled and trained an itchy-plant classifier, but didn’t use it for my Render.com deployment, because seeing some similar prior student projects put mine in turnaround. I’ve read all the lesson threads (in-class and advanced) at least once.
And after all of that, is Pt. 1 crystal clear? Well, I’d probably choke on a pop quiz, and that part where Jeremy recommends we be able to rebuild the notebooks from scratch is not yet mission accomplished. But I’ll be damned if “The Matrix Calculus You Need For Deep Learning” isn’t starting to make perfect sense—where a month ago it honestly made me nauseated. And I’ll say this: by the fifth playback, Lesson 7 stood out as easily the most action-packed, idea-provoking lecture I’d ever heard.
Just looking at the subjects in the “Coming Up: Pt. 2” slide, or Jeremy’s more recent notes at fast.ai, gives me a classic Christmas morning rush. I can’t wait! Looking forward to meeting some of you in person, and all of you here in the forums.