@Ducky Don't worry, it comes with time and practice. This is my second time taking Part 2 and I'm just now starting to understand some key concepts. I'll share three things that have really helped me this time around.
First, I've been pulling the lectures off YouTube onto my phone so I can listen to them as I bike to work. I've got an hour-long commute (East Van to Richmond) and Jeremy's voice is pretty much all I listen to all week long. Hearing the lectures a second or third time makes a huge difference in catching and actually understanding the key points.
Second, I got much more comfortable with the Python debugger (pdb), and I started throwing breakpoints all over the code so I could inspect the tensors and see exactly what's being passed at each point.
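To give a rough idea of what I mean, here's a toy sketch (the model and shapes are made up, not from the course notebooks): drop a `pdb.set_trace()` wherever you're curious and you can poke at the values interactively.

```python
import pdb

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Throwaway model just for illustration -- not one of the course models."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        # Pause here: at the (Pdb) prompt you can run x.shape, x.dtype, x.mean(), etc.
        pdb.set_trace()
        return self.fc2(x)

model = TinyNet()
out = model(torch.randn(4, 10))  # execution stops at the breakpoint above
```

Once you're at the (Pdb) prompt, `n` (next), `s` (step), `c` (continue), and `p x.shape` cover most of what you need day to day.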
Finally, and probably most importantly, I built a neural net end to end that wasn't based on any of the models Jeremy taught. That meant writing a custom Dataset class, a DataLoader, a custom model class, and a custom loss function. Debugging it was painful, and that's where pdb really shone. Now I feel much more comfortable with what's going on under the hood.
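If it helps to see the shape of that, here's a stripped-down version (random data and a hand-rolled MSE loss, purely to show where each custom piece plugs in -- my real project was a lot messier):

```python
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    """Hypothetical dataset wrapping in-memory tensors; swap in your own
    file loading / preprocessing here."""
    def __init__(self, xs, ys):
        self.xs, self.ys = xs, ys

    def __len__(self):
        return len(self.xs)

    def __getitem__(self, idx):
        return self.xs[idx], self.ys[idx]

def my_loss(preds, targets):
    """Plain MSE written by hand, just to show where a custom loss plugs in."""
    return ((preds - targets) ** 2).mean()

# Fake data so the sketch runs end to end.
xs, ys = torch.randn(100, 10), torch.randn(100, 1)
dl = DataLoader(MyDataset(xs, ys), batch_size=16, shuffle=True)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for xb, yb in dl:
    loss = my_loss(model(xb), yb)
    loss.backward()
    opt.step()
    opt.zero_grad()
```

The nice part of writing `__getitem__` yourself is that DataLoader's default collation just stacks whatever you return into batches, so you can start this simple and grow it as your data gets more complicated.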
I know it feels overwhelming at first, and I struggled my first time through as well, but I'd suggest focusing on understanding one project really well rather than trying to understand it all. The lectures are there to come back to once you're ready, and the forums stay active for questions after class is through, especially once the course is opened up to the public.
Keep at it! Jeremy's mentioned a few times that the most important ingredient for success in this field is persistence.