How to do fastai - Study plans & Learning strategies

I have long been searching for the optimal way to study the fastai courses and make the most of this transformative opportunity. I have read the forums to see what others have been doing, researched the web for tips & tricks, and kept Jeremy’s teaching philosophy in mind while formulating my ongoing learning strategy.

My Fastai Study System

My learning strategy is always evolving, and the current iteration looks something like this:

1. Lecture first pass:
A quick first watch of the lecture video at 1.75x speed, to:

  • Grasp the big picture (top-down approach)
  • Stay active by taking notes or by participating in discussions, if live

2. Note taking strategy:
The idea is for the notes to serve as your central knowledge repository. You must be able to revisit them to revise the concepts or add new items to your deep learning knowledge. They also serve as your dashboard and control centre for the entire course, where you can keep pending To-Do items, ideas to explore or questions to ask. An example: Notes-part1_lesson#4

The notes are split into the following sections:

  • KEY POINTS

    • concepts and ideas about the subject
    • write down as you watch
    • add as you read, explore or implement
    • compile items across notes, polish and publish

  • ADVICE

    • general advice and meta ideas from Jeremy
    • write down as you watch
    • add as you read, explore or implement

  • TO-DO

    • items such as experiments, challenges, project ideas, blog ideas
    • mark as urgent or later, done or not done
    • update during second pass
    • revisit to pick up where you left off, to revise in detail or to go bottom-up

  • READING & EXPLORING

    • notebooks (use the debugger), blogs, papers
    • deep learning concepts to study and explore
    • mark as done or to keep exploring later

  • QUESTIONS

    • doubts on the topic & curious questions beyond it
    • add answers or follow-up questions from your own study & research
    • then post the question to the forum and update with definitive answers

3. Lecture + Notes second pass:
This stage can be repeated as many times as required, even after the course is completed. Remember to use your notes for the “context switch” and as the central control panel.

  • Watch selected parts of the lecture for details (bottom-up approach)
  • KEY POINTS + ADVICE: update sections with new knowledge
  • READING & EXPLORING: complete the essential items from this section
  • QUESTIONS: Work out this section
  • TO-DO: complete the pending items
  • Update the Notes and repeat


Some of the posts and blogs that I have found to be very useful and informative in this regard:


How to update this system for Part-2?
@mandroid6 @arjunrajkumar @MadeUpMasters @init_27

Part-2 is oriented towards extensive research and programming. We need improvements in:

  • Research: read and implement papers
  • Coding: read, understand and improve fastai library

How can we, learners, do a better Part-2?
@jeremy @rachel @sgugger
One idea to get the ball rolling is to have a Capstone Project which could be a kaggle challenge, research paper or major fastai feature development.


A very important thread! My first resolve is to complete the lectures and do the notebooks during the week of each lecture. During last year’s Part 2 I was so overwhelmed by the level and quality that I gave up. Back in Part 1 of v3, I kept pace with the lectures and did the notebooks during the week of each lecture, even though I had not grasped all of the concepts. I also took the time and interest to be part of online communities like TWiMLAI and Data Science India. For the first time in my life I started digging deep into fastai and began to share / present what I understood with the community. I have also revised all the notebooks once more during the break after Part 1.

I am still not an expert on fastai by any means. But I am much more comfortable with fastai and with my ability to work through this library in particular and deep learning in general. I wanted to try the notebooks on different datasets but did not manage to. That is one thing I intend to do religiously this time.

The only way to get anything is to immerse yourself in it. Immersion depends on two things: time and quality of work. This will vary for different people depending on how they are able to spend their time and achieve outcomes with it.

At a bare minimum, one should start. No matter how trivial it may look or sound, one needs to start with a project of one’s own. Then be bold enough to share it with the world. You may get bouquets or brickbats; something will come your way. Build a pipeline of projects, with the level of complexity increasing marginally along the way. If you can find somebody to partner with, that would be ideal. If there is no one but yourself, then start by yourself and don’t wait for the Messiah to come.

When you share your work and participate in communities, you will find interesting people to team up with. Don’t hesitate to ask and don’t fret if they don’t agree. When you start, you gain momentum, and it will take you to places you never thought were possible.

Here are some things I would like to do:

  1. Follow people like Jeremy, Sebastian, Rachel, Radek on twitter
  2. Look at papers they mention
  3. Read the summary first
  4. Go through and replicate the paper and its code
  5. Take a new paper and try to implement code from scratch
  6. Do small projects
  7. If possible compete in Kaggle competitions

This message is as much for myself as for everyone else. Thoughts and feedback welcome.


As for the note-taking part, I would actually recommend making only a very high-level list of topics and concepts while taking the class. Then, write in your own words about each concept and topic individually, not necessarily following the course structure, possibly polishing it enough to publish on Medium.

The idea here is that note-taking can easily turn into passive word transcription. The above approach forces you to make it an active learning experience, as writing in your own words requires active engagement with the content.


Indeed, using the ‘papers with code’ website is a great way to set achievable goals in terms of finding good papers to implement while having a baseline implementation to compare against our own work.

Yes, this is very practical advice, since the pace of the Part-2 lectures is very hard to keep up with. Last time, it was quite challenging to take notes while attending the lectures, even when I wrote only a phrase or a single word for each concept discussed. However, the system of coming back and updating the notes in the second pass does take care of this problem. For example, I have seen my own notes grow from phrases and words into sentences and small paragraphs as I kept iterating over them.

Your idea of editing and polishing important concepts and topics to turn them into blog posts must indeed be added to the system. Thanks! :smile:


As Jeremy has suggested, just go through Part 1 again to stay in touch with all the topics. I strongly believe we can read through the wonderful notes by Hiromi and the class notes to get a refresher, but please, please go through the 5th lesson very carefully, because it is actual GOLD. There are so many things you can take in from that lecture, explained so easily that I believe you can crack deep learning interviews using it. :stuck_out_tongue:


Perhaps some of the community here could also look at combining some of these notes into a collaboratively edited extra-awesome set of course notes? I think people would still need to create their own notes fairly independently for their own use, but maybe could then combine them together with other people’s notes in some shared doc.

Just an idea - not sure about the details. If anyone is interested in trying it, feel free to start a topic and put something together! :slight_smile:


How I do it

  1. I watch the live lectures, jot down things I want to look into further, and interact on the course message boards (e.g., I “like” questions, ask questions, and answer questions where I can). I do not take a lot of notes.

  2. 2-3 days later, I’ll print out the Jupyter notebooks covered in class, watch the lecture again, and take handwritten notes on the printed hard copies. I pause the video frequently during this pass and run the notebooks with pdb.set_trace() littered throughout. I examine the shape of things as the code runs and add that info to my handwritten notes. A number of scientific studies suggest that taking notes by hand improves recall and retention … and I can say that, for me, it has worked far better than typing my notes or highlighting the notes of others.

  3. Study my notes and re-watch any pertinent parts of the lectures that need watching again.
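The pdb workflow above can be sketched roughly like this. The function, data and shapes below are made up for illustration (a toy matrix multiply stands in for a fastai notebook cell); the point is the pattern of dropping a breakpoint into the code path and inspecting shapes as things run:

```python
# Sketch of "litter pdb.set_trace() through the notebook" on toy data.
DEBUG = False  # flip to True to drop into the interactive debugger

def matmul(a, b):
    """Multiply two matrices stored as nested lists."""
    if DEBUG:
        import pdb; pdb.set_trace()  # inspect len(a), len(a[0]), len(b[0]) here
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

a = [[1, 2, 3], [4, 5, 6]]    # shape (2, 3)
b = [[1, 0], [0, 1], [1, 1]]  # shape (3, 2)
out = matmul(a, b)
print(len(out), len(out[0]))  # shape of the result: 2 2
```

With `DEBUG = True`, execution pauses inside `matmul` and you can poke at the inputs before the multiply happens, which is exactly the “see what goes in, see what comes out” habit applied to a notebook.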

Throughout the course I do these things:

  1. Try to build an application around something I’m learning (last year I built an Android application that distinguishes In-N-Out burgers from non-In-N-Out burgers). If all you ever do is build Jupyter notebooks, you’ll never learn how to turn this stuff into something that others can use.

  2. Compete in at least 1 kaggle competition

  3. Apply what I’m learning at work in building an end-to-end deep learning system (for me, it’s all NLP work)

  4. Write 1-2 quality articles with accompanying code that folks can run

  5. Interact heavily on the message boards … asking questions I’ve first spent some good time investigating and formulating myself, and also answering questions from others.

That’s it. Post-season, I’ll work with as many students as are interested on a high-school robotics team I mentor, building out ML solutions and application development.



Great ideas… I’m gonna have to steal some! :grin:

Especially using pdb to step through the notebooks. Jeremy has mentioned this a few times: “See what goes in, see what comes out”, and he has talked about using the debugger as well. Now I will definitely add it to my study routine.

Besides, I’m indeed guilty of building notebooks and leaving them there. I will start turning them into something useful, or at least something fun. Building an end-to-end deep learning system is, I believe, the necessary step to mastering deep learning.

Thanks a ton for your reply!
It’s always great to know how others do this course.


There were some great notes for Part 1, some done independently and some as a wiki. I think a wiki is great; something even better could be notes in the form of Jupyter notebooks, so you can both read and run the code / experiment…


This may not be as relevant for Part 2, since Part 1 is a prerequisite and people know the grind by now, but for what it’s worth, here is an old blog of mine on this topic: Deep Learning with

Most suited for those who are new to everything out here! :slight_smile:


Thank you for creating this thread.

I think, to be completely honest, I have only figured out the How not to do fastai part yet! I have been part of the community since 2017, and only now have I started to become a little comfortable with the materials.

One thing I have learned while failing is to tame my curiosity. I would always venture off into new interests in tangential directions. I’ve learned (rather, am still learning) how to stick close to the materials here while only adding knowledge that gives me a boost.

I’ll definitely focus my efforts on a few things:

  • Presenting ideas: Presenting the lesson’s ideas at meetups has really helped me actually understand what’s going on. Trust me, the confidence boost when you can answer a peer’s doubts is great! I’ll continue doing at least one mini-presentation or one complete lesson walkthrough each week.

  • Note-taking: I recently learned Medium has an “unlisted” option, so it would be a good test to write notes and share them here each week. Later I can go back and polish them further; I’ve found that helps deepen the knowledge. (Unlisted keeps the blog posts from being released publicly.)

  • More comfort with the fastai source code: I’ve constantly spent time understanding how the machinery is inter-related, breaking the code down, expanding it, and understanding the decisions that went into it. I won’t say I’m good at it, but with time (I’m doing my third pass before Part 2 launches), I’ve reached a confidence level where at least I can tell you this much. I’ll keep doing that, maybe in a better way once we get into Part 2 and Jeremy explains it.

  • Writing 1 blog post per week (apart from the lesson notes): I think that’s a target I’ll commit to. With the interview series, I promised to deliver one interview per week and I’ve disciplined myself enough to do that. I think I can do the same with DL, hopefully delivering some quality writeups.

  • Research papers: Tbh, I feel like a kid who always eats french fries instead of veggies when it comes to research papers. I’ve started to read the ones highlighted in the later lessons. I’m still not comfortable; I hope to change that once we’re into Part 2.

About the capstone project: I happen to be a final-year CS undergrad and I’m barely a novice Pythonista. So it puts me in a funny situation where I can neither say, “I have 1 year of coding experience, I can build upon fastai,” nor claim any domain expertise, so I have no idea where to apply these amazing techniques. I’m still waiting for a shower thought like Jason Antic’s (creator of DeOldify) to get obsessed with a project and go ahead and build it. Meanwhile, I’ll continue browsing the forums in the hope of teaming up with someone. I think Kaggle competitions are a great place to learn, but not all competitions align with fastai. You’d definitely benefit! But there is a lot to learn outside of them (don’t take my word for it; you might be pretty amazing at it, or maybe I’m too slow).

So here’s my plan for now:

  • Review Part 1 before 18th of March.
  • Create terrible mini-projects. But build them. Later, come back and re-factor them.
  • Once fastai starts, keep up with the forums, lessons.
  • Extra points to me if I can keep the promise of writing 1 good writeup each week on DL.
  • Get comfortable with source code, research papers.
  • Again, go back to mini-ideas and refactor them.

PS: if you’re thinking this guy doesn’t have a life, lol, I actually don’t. I completed the coursework for my Bachelor’s degree early and have another semester to kill, so I’m a full-time fastai student.

I can be pretty idiotic with my plans, so I’d be grateful for any feedback/suggestions :slight_smile:


I would also add checking out the winning solutions on Kaggle. Some exercises I’ve enjoyed doing:

  1. Annotating someone else’s kernel. Basically take their kernel and explain every line in markdown mode.
  2. Taking a public Kaggle kernel model, studying it, and then trying to replicate it as quickly as possible without referencing the original. The penalty for cheating is that you have to restart the notebook from scratch.

Previously, I have worked through:

  • version 1 parts 1 and 2
  • version 2 parts 1 and 2
  • version 3 part 1 to a lesser extent.

All of these were after the original presentation, when all the material was readily accessible, so I am not sure how accessible the material will be once the live video has finished. Once the course starts I will have a better understanding of the learning environment, so I reserve my comments until then. In the first 2 weeks, and then after, I will be 9/8 hours adrift, viewing the videos in the early hours of the morning, and I hope they can still be viewed post-presentation in case sleep gets the better of me.

As several students commented in the previous version, their greatest insight came from running the code and experimenting with inputs and outputs. I usually create new cells and print out variable values as I work through the notebook. The Python editor Spyder can be useful, as it has a variable explorer where you can see a variable’s name, type, size and value. Creating the same notebook on different data can sometimes detract from the learning if difficulties with the data arise. Recreating the notebook from scratch with the same data first is probably preferable, graduating to new data second.

I am currently working through an excellent book, ‘Bayesian Analysis with Python, Second Edition’ by Osvaldo Martin, using PyMC3 and ArviZ, published by Packt, which has opened up a difficult subject. It goes well with the final few chapters of the Deep Learning Book, which were so eloquently walked through under the direction of former fastai student Alena Kruchkova. I was hoping to explore this area, which I shall now put on hold since I have my invite to version 3 Part 2. Interestingly, as an aside, PyMC3 uses Theano as its back end for automatic differentiation, but since Theano is no longer developed (though maintained by pymc3dev), the next version, PyMC4, will use TensorFlow as its back end; for the life of me I don’t know why they did not go for PyTorch.

I am based in the North West of the UK.
Thanks for the excellent content here


Thanks to everyone for posting your personal study plans for this course. We have heard of many successful fastai students who have gone on to do great things. This thread might give us a glimpse into the system, process and journey behind those successes rather than just the outcomes.

I think everyone in the community stands to benefit from this knowledge. Keep going!


Nice thread! I don’t have much to add, but one thing that comes to mind is keeping a list of your own weaknesses and auxiliary learning needs.

There are so many different technologies being combined here: Jupyter, Python, git, regular expressions, libraries like numpy and pandas, etc., and we all come from such varied backgrounds that everyone has different abilities and needs for each of them. I think this creates pitfalls for a lot of students, because it leaves them feeling very overwhelmed and spending too much time trying to master too many things instead of learning them as needed.

My approach to overcoming the feeling of being overwhelmed, and the temptation to put the course off for 3 months while learning all these other things, was to make a big list of them, decide how much time they’d take and how valuable they are (in order to prioritize), and then spend 10-20% of my time each week chipping away at them. If I come across something in the course/notebooks that I need more immediately, I prioritize that in my free study time for the week.

One thing I think would be really helpful for the community would be a “Recommended Learning Resources” thread (I’m happy to start it, or someone else can take the initiative), just like the Python Resources thread, but with the best tutorials for all these auxiliary technologies that become overwhelming to less experienced students. It needs to have a caveat that these aren’t prerequisites and that you should be spending at most 20% of your time on them so that students don’t get sucked into the learning vortex and use it as a way to procrastinate.


Interesting post!

At university I discovered that I got distracted by taking notes, but I felt pressured into doing so because everyone around me was writing pages and pages per lecture. The strange thing was that this had the opposite effect for other people: taking notes helped them focus. So I guess you need to figure out what works best for you.

As soon as I stopped taking notes altogether, I had much better results. That said, there were always slides to refer back to at university. For the fastai course, I usually make about 5 bullet points per lecture. Then, if necessary, I might go back and watch a specific segment again.

Worth noting that I work as a machine learning engineer, so I already understand the theory. Nonetheless, there are loads of things in this course that I have never seen before - it’s super helpful!

Something I found extremely effective that I’ll share is converting the videos into audio format and getting them onto your phone, so you can listen to them podcast-style while commuting / travelling. For me, finding time to sit down and watch a lecture multiple times was really hard, but the lectures work surprisingly well in audio-only format if you already know the topics.
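One way to sketch the video-to-audio conversion is with ffmpeg. The filenames below are hypothetical, and this version only prints the commands (a dry run) so you can review them before actually executing anything; `-vn` drops the video stream and `-b:a` sets the audio bitrate:

```python
# Dry-run sketch: print the ffmpeg commands that would convert each
# lecture video to an mp3. Assumes ffmpeg is installed if you run them.
from pathlib import Path

lessons = ["lesson1.mp4", "lesson2.mp4"]  # made-up filenames

for f in lessons:
    mp3 = Path(f).with_suffix(".mp3")
    cmd = ["ffmpeg", "-i", f, "-vn", "-b:a", "64k", str(mp3)]
    print(" ".join(cmd))  # review, then run e.g. via subprocess.run(cmd)
```

A low bitrate like 64k keeps the files small for a phone, which is usually fine for speech-only lectures.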

I’ve probably listened to last year’s lectures a dozen times at this point, and beyond being a great refresher on the topics themselves, it also regularly gives me new ideas to explore while the audio continues to play. I’m basically at the point now where Jeremy’s voice primes me to think deep (learning) thoughts. :wink:


I think this is a really important point. There is no one exact way to do this, because we all have different personalities and experience. It’s like trying to build a one-size-fits-all ML model. What is really helpful, though, is talking about the parameters involved and how you can tweak them to optimize your learning. For instance, before this thread some people may not have considered watching the lectures at faster speeds, or converting the lectures to audio for listening in the car/gym. So let’s keep the ideas coming!


Thank you for this nice content. As you mentioned, PyMC4 will use TFP (TensorFlow Probability) as its back end.
