This is a wiki post - feel free to edit to add links from the lesson or other useful info.
Really excited about the next part! Would love to deep dive into NLP!
I’m a couple of chapters behind, but now that the end has been announced I’ll be sure to time block and catch up.
I saw that you mentioned something about revisiting NLP on your FullyConnected talk recently. Is there going to be a next part / part 3 on NLP, as @SamuelNM is suggesting above?
Alright, I sneak-peeked the first couple of minutes. Looking forward to finishing part 2 properly over the summer and being ready for part 3.
Thank you for everything!!!
Thanks Jeremy for Part II. It is an amazing course with a deep-dive into the foundations, the miniai framework and debugging best practices among others.
Looking forward to delving into the next part of the course, NLP and GPT-4. It would be great to have an AMA some time during the next part of the course.
Wow! That was quite the journey. I can’t believe we started back in October. I had more time to engage in the first 2-3 months and more recently I was mostly watching the videos and only taking notes. I have a lot to dig into and go back over. This was my first time taking part in a FastAI course and I absolutely loved it. I learned so much. I am super excited for the NLP course too and will definitely be there. Thanks for all your hard work and having the patience/tenacity to get to the very end.
I’ve got some catching up to do myself, but I’m much smarter than I was at the start already.
@jeremy when do we expect the start of the NLP modules? even a rough timeline should help us plan out the next days/weeks!
Big love to the teaching team; you guys are so great for patiently teaching some fairly complicated material and lifting up the community.
You could always review the FastAI 2019 NLP course (fast.ai - new fast.ai course: A Code-First Introduction to Natural Language Processing )
Obviously slightly dated but good for the fundamentals.
Don’t expect them at any particular time, or have any expectations as to what they contain or how they work – it’s all totally up in the air at the moment since we’re still in pre-planning stages. It could be weeks, months, or years!
I am also a couple of lessons behind, but that was a fantastic course. Thank you, guys!
Can we also please have Johno’s colab notebook? I wasn’t able to find it. The resource from HF course is different from what Johno presented.
You should check out Andrej Karpathy’s YouTube channel. He has 6 lessons building language models and working toward building a small GPT transformer. Andrej Karpathy - YouTube
Part 1 Duration: 12:34:02
Part 2 Duration: 31:19:08
Both combined: 43:53:10
Congrats on this great course!
A poem by ChatGPT:
Fastai Part 2, done and through,
Jeremy Howard, we owe it to you.
Thousands of lines of code we wrote,
With every neuron, we learned and smote.
We trained the models with the best of pace,
Fine-tuned and tested with a confident grace.
Our deep learning skills now stand to bloom,
A special thanks to you, o’ Master of the room.
Now as we move ahead to embrace,
New challenges, we cannot replace.
Our hunger to learn, still burning so bright,
Our next step would surely seek new heights.
NLP, the study of words and speech,
Creating Chatbots is its primary reach.
We yearn to understand the ChatGPT,
And for that, we need you, Jeremy, can’t you see?
So once again, we seek your guidance and skill,
To help us grow and further instill.
The knowledge and wisdom that you possess,
In our quest for learning, may we progress.
I think this has been a really monumental effort by Jeremy, Johno and Tanishq. It’s been hard enough for us just to keep up with the course, and we’re not doing the research and development that has been going into it. Personally I think it will take another couple of months for me to properly finish the course, but it’s been great and I have certainly learned a lot. I will also look forward to part three, if it happens.
It is absolutely fantastic. I now have vacation from work and will be putting in extra effort to study.
Hi all, first time poster here. Amazing content, thanks for the hard work of everyone involved!
I have a question related to the usage of latents, specifically about using only the means to train the diffusion model.
In the context of data augmentation, wouldn’t it make sense to also use the variances and not just the means? Instead of presenting the same latents every epoch, the data loader would draw a sample according to the means and (co)variances for each image. This way we get some augmentation “for free”, and this type of augmentation is also meaningful for the VAE.
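For what it’s worth, here is a minimal sketch of what I mean, assuming the VAE encoder gives us per-element means and log-variances (the shapes and tensors below are just placeholders, not the actual course/miniai API). Sampling uses the usual reparameterisation trick, z = μ + σ·ε with ε ~ N(0, I), so each epoch sees a slightly different latent drawn from the VAE’s own posterior:

```python
import torch

# Placeholder latent stats: a VAE encoder typically predicts a mean and a
# log-variance per latent element (e.g. a 4x64x64 latent for one image).
mu = torch.randn(4, 64, 64)       # stand-in for the encoder's predicted means
logvar = torch.randn(4, 64, 64)   # stand-in for the predicted log-variances

def sample_latent(mu, logvar):
    "Reparameterised sample: z = mu + sigma * eps, with eps ~ N(0, I)."
    sigma = (0.5 * logvar).exp()  # logvar = log(sigma^2), so sigma = exp(logvar/2)
    eps = torch.randn_like(mu)
    return mu + sigma * eps

# Using only the means gives the identical latent every epoch...
z_mean_only = mu
# ...whereas sampling gives a fresh latent each time, acting as "free"
# augmentation that is consistent with the distribution the VAE learned.
z_sampled = sample_latent(mu, logvar)
```

The nice property is that the perturbation is scaled per element by the VAE’s own uncertainty, rather than being an arbitrary noise level chosen by hand.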