I always recommend actually working on some real datasets, especially Kaggle competitions.
When I took the fast.ai course for the first time, I basically went through Kaggle to find an interesting dataset for each task covered in the course. Coming from a biomedical background, I focused on biomedical datasets: for image classification I worked on a diabetic retinopathy dataset, for segmentation an ultrasound segmentation dataset, and so on. I also looked at ongoing competitions and applied the tabular module to an earthquake prediction competition. I selected a bad model (just a public notebook) for my final submission, but the fastai baseline model would have gotten me a silver medal in that competition!
I especially focused on the diabetic retinopathy dataset, trying out the different things discussed in class (the whole fine-tuning process, making a demo, then trying to improve the data, and so on). A couple of months later, I was very lucky that a new diabetic retinopathy competition started, and after teaming up with someone, I got my first Kaggle silver medal.
My suggestion is to dig into the code. I learned a lot about what’s going on by digging into the fastai codebase, and many of my questions were answered just by walking through the code. Another useful exercise is to see how the code corresponds to the math behind a technique. Jeremy mentions this frequently as well: a lot of techniques with complicated math or terminology turn out to be fairly simple code snippets.
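For example, label smoothing is usually introduced with a formula for mixing the one-hot target with a uniform distribution, but it boils down to a few lines of PyTorch. This is just my own minimal sketch (assuming a standard multi-class classification setup), not the exact fastai implementation:

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(preds, targets, eps=0.1):
    # loss = (1 - eps) * cross_entropy(target) + eps * uniform term,
    # i.e. the one-hot target is mixed with a uniform distribution.
    log_probs = F.log_softmax(preds, dim=-1)
    nll = F.nll_loss(log_probs, targets)      # usual cross-entropy term
    smooth = -log_probs.mean(dim=-1).mean()   # average -log p over all classes
    return (1 - eps) * nll + eps * smooth

# quick check on random data
preds = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
print(label_smoothing_ce(preds, targets))
```

Once you see it written out like this, the paper-level description stops feeling intimidating.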
I don’t know if I answered your question exactly, but I hope it helps nevertheless.