Last year I followed both the part 1 and part 2 MOOCs, and I've been all-in on deep learning ever since (MILA on-site course, Kaggle, Coursera, CS231n, books, tons of papers, …).
The top-down approach used in the fast.ai course is unique, and it definitely caught my attention from the beginning in 2016. It was the only viable option for me back then to start learning about this incredible subject. Thanks again by the way to @jeremy and @rachel. Unfortunately, I didn't have enough time to register for P1 V2 as an international fellow, even though I wanted to follow it.
I watched lessons 1 and 2 from the part 2 videos, and my first impression coming out of them is that you should rebrand the course fast(er).ai instead of fast.ai! (With a borrowed reference to Fast(er) R-CNN.)
Seriously, even though I didn't explicitly set up on AWS, Paperspace, or Crestle, the initial working setup looks faster than last year, and that was already fast.
From a top-level view, the fast.ai library looks pretty clean and allows an even higher-level API than the teaching wrapper over Keras used in V1.
On the technical side, I'll remember some nice, well-implemented ideas from lesson 2: the learning rate finder, differential learning rates, and a personal favorite, progressive resizing from lower to higher resolution to overcome overfitting at high resolution. Very clever ideas. I can't wait to try this last trick on medical images …
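For anyone curious, here's a toy sketch of the learning-rate-finder idea (the LR range test): sweep the learning rate exponentially and keep the best rate seen before the loss blows up. This is my own simplified illustration on a quadratic loss, not the fastai implementation — fastai plots loss vs. learning rate and lets you eyeball the curve; the stopping rule and the `lr_finder` helper here are assumptions for the sketch.

```python
# Toy illustration of the learning-rate range test, NOT the fastai code:
# sweep lr exponentially, take one SGD step per lr from a fixed start,
# and keep the best lr seen before the loss exceeds the starting loss.

def loss(w):
    return (w - 3.0) ** 2          # toy convex loss, minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # its analytic gradient

def lr_finder(w0=0.0, lr=1e-4, factor=2.0):
    base = loss(w0)                # loss at the starting point
    best_lr, best_loss = lr, float("inf")
    while True:
        w1 = w0 - lr * grad(w0)    # one SGD step at this learning rate
        cur = loss(w1)
        if cur > base:             # worse than where we started: diverging
            break
        if cur < best_loss:
            best_lr, best_loss = lr, cur
        lr *= factor               # exponential learning-rate sweep
    return best_lr

print(lr_finder())                 # a rate near the edge of divergence
```

In the real thing you'd of course run this over mini-batches of an actual model and pick a rate a bit below the steepest part of the loss-vs-lr curve, but the sweep-until-divergence mechanic is the same.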
Kudos for part 2! And you've got my vote for rebranding to fast(er).ai! Too bad the URL is already taken …