Same subjects as Part 1 v1, or new challenges?

Just curious whether this course is going to go through the same challenges as Part 1 v1, or whether it will be a completely new set of challenges. I know one big change will be using PyTorch instead of Keras.

I think one of the changes mentioned is the use of ResNets as the starting pretrained model instead of VGG16. Likewise curious to see what else gets thrown in the way.
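
For reference, here’s a minimal sketch of what starting from a pretrained ResNet looks like in PyTorch with torchvision (the model size and the two-class head are just illustrative assumptions, not necessarily what the course will use):

```python
# Transfer learning sketch: load an ImageNet-pretrained ResNet and swap its head.
# Assumes torchvision; resnet34 and the 2-class output are illustrative choices.
import torch.nn as nn
from torchvision import models

model = models.resnet34(pretrained=True)  # downloads ImageNet weights

# Freeze the pretrained backbone so only the new head trains at first
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for our task
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. dogs vs. cats
```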

Based on one of Jeremy’s tweets, I believe it’s a complete rewrite. I am sure we will still cover building image classifiers and transfer learning with VGG, ResNet, etc., along with NLP using RNNs and LSTMs. If you follow Jeremy on Twitter, you will get a pretty good idea of what this course will be all about. He has discussed lots of new techniques, from dropout variants to differential learning rates. I believe one of the reasons he chose PyTorch is the flexibility it allows vis-à-vis Keras. I would suggest not getting hung up on the framework. You want it to get out of the way, so that you can run lots of experiments. I am sure that if you know how to do things in PyTorch, it will translate to doing well with whichever framework you use for work - TensorFlow, CNTK, MXNet, etc. Looking forward to the start of the course.
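
As an aside, here’s a rough sketch of what “differential learning rates” might look like in PyTorch, using optimizer parameter groups (this is my reading of the idea; the layer grouping and values are made up for illustration, not the course’s actual recipe):

```python
# "Differential learning rates" sketch via PyTorch parameter groups:
# earlier (more generic) layers get smaller steps than the new head.
# The grouping and learning rates below are illustrative only.
import torch.optim as optim
from torchvision import models

model = models.resnet34(pretrained=True)

optimizer = optim.SGD(
    [
        {'params': model.layer1.parameters(), 'lr': 1e-4},  # early layers: tiny steps
        {'params': model.layer4.parameters(), 'lr': 1e-3},  # later layers: larger steps
        {'params': model.fc.parameters(),     'lr': 1e-2},  # new head: largest steps
    ],
    momentum=0.9,
)  # note: layers not listed in any group are simply not updated here
```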

Everything will be new! :slight_smile:

Can I expect some topics related to Deep Reinforcement Learning, or won’t you be covering that?

I would like to see more NLP than in the previous version of this course.

I am definitely in for it!

No, I won’t. I’m still not convinced about the utility of RL, compared to all the other very useful tools we have.

There will be more NLP this year.

Hi @jeremy

Why aren’t you convinced about the utility of RL? For instance, recent research by Google DeepMind shows that RL is a powerful approach for many tasks, such as learning to play games from scratch. Recently they came up with something called AlphaGo Zero, which they claim is more powerful and can learn better than AlphaGo.
I somehow feel that deep RL is a step closer towards AGI… or is there some sort of hype around RL?
I would like to know your opinion on this…

Thanks

@jeremy Not convinced even after Alpha Go Zero? :slight_smile:

Not convinced that there are lots of real-world tasks that fast.ai students will be working on in the short term that will benefit from RL.

Also, AlphaGo Zero in a sense doesn’t use RL - unless you consider MCTS a type of RL. It’s exactly what I’m concerned about:

  1. It’s not an actual real-world application
  2. It shows the shortcomings of current RL approaches

I think something will come along that solves the same problems RL is designed to solve, and maybe it will show some similarities to current RL approaches. We’ll see. But there isn’t really anything I feel we should be teaching yet - although the actual approach used by AlphaGo Zero is the closest I’ve seen. (The problem, however, is that it requires learning a bunch of material with little connection to other DL areas - so it feels like it would be a whole different course!)
