At university I discovered that taking notes distracted me - but I felt pressured into doing so because everyone around me was writing pages and pages per lecture. The strange thing was that it had the opposite effect for other people: taking notes helped them focus. So I guess you need to figure out what works best for you.
As soon as I stopped taking notes altogether, I got much better results. That said, there were always slides to refer back to at university. For the fastai course, I usually make about 5 bullet points per lecture, then if necessary I might go back and watch a specific segment again.
Worth noting that I work in a machine learning engineer role, so I already understand the theory. Nonetheless there are loads of things on this course that I have never seen before - it’s super helpful!
Something I found extremely effective that I’ll share is to convert the videos into audio format and get them on your phone so you can listen to them podcast-style while commuting or travelling. For me, finding time to sit down and watch a lecture multiple times was really hard, but the lectures work surprisingly well in audio-only format if you know the topics.
I’ve probably listened to last year’s lectures a dozen times at this point, and beyond being a great refresher on the topics themselves, it also regularly gives me new ideas to explore while the audio continues to play. I’m basically at the point now where Jeremy’s voice primes me to think deep (learning) thoughts.
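If you have the lecture videos downloaded already, you don’t even need an online converter for this step. Here’s a minimal sketch of how I’d do the conversion in Python, assuming ffmpeg is installed and the lectures are .mp4 files in a local folder (the folder and file names here are just placeholders for the example):

```python
import subprocess
from pathlib import Path

def mp3_command(video_path, mp3_path):
    """Build an ffmpeg command that drops the video stream (-vn)
    and re-encodes the audio track as mp3 at a reasonable VBR quality."""
    return ["ffmpeg", "-i", str(video_path), "-vn",
            "-codec:a", "libmp3lame", "-qscale:a", "4", str(mp3_path)]

def convert_lectures(folder="lectures"):
    # Convert every .mp4 in the folder, skipping files already converted.
    for video in sorted(Path(folder).glob("*.mp4")):
        target = video.with_suffix(".mp3")
        if not target.exists():
            subprocess.run(mp3_command(video, target), check=True)
```

Then just drop the resulting mp3s into whatever podcast player you use on your phone.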
I think this is a really important point. There is no one exact way to do fast.ai, because we all have different personalities and experience. It’s like trying to build a one-size-fits-all ML model. What is really helpful, though, is talking about the parameters involved and how you can tweak them to optimize your learning. For instance, before this thread some people may not have considered watching the lectures at faster speeds, or converting the lectures to audio for listening in the car/gym. So let’s keep the ideas coming!
I’ve inserted a link to it in the Feedbacks from existing study groups wiki post. It would be great to write up the best practices for studying at home as a guide for participants.
This post is from a newcomer’s perspective and is mostly about my personal experience. I joined fastai v2 part 2 and did formulate a study plan last time. I failed, mainly for the reasons stated below.
Didn’t take part in v2 part 1 actively, and so was a bit late to realise the most important thing of all… “code, code, code”, as Jeremy says in one of the lectures. I was swayed by the concepts (cool stuff) in deep learning, which led me to devote 90% of my time to reading papers, books and blogs, and only the rest to ‘actually’ writing code.
Didn’t share my work with the community. To be honest, I was just overwhelmed by the work and discussions everyone was presenting in the forums. Until then I hadn’t been exposed to such a community, and I was also not really comfortable online. Since there was no capstone project, I kind of went all over the place. (A study group would have definitely helped.)
Ideas that might help.
Start replicating the v3 part1 notebooks on other datasets, if you haven’t already.
Get comfortable with python and pytorch.
Make personal notes of concepts/code snippets you forget the most. (This helps a lot)
The most interesting part of part 2 for me is converting math/intuition into code. So try some pythonic scientific programming exercises if you want to feel more confident.
I helped some of my friends with their deep learning projects. If you know anyone who is doing their UG/PG/PhD, reach out to them and volunteer to solve a problem statement. We are going to delve into SOTA deep learning practices, and you’ll be surprised at how much you can contribute during/after this course.
Have a final goal in mind - like your final-year project, upcoming client requirements, or a Kaggle competition. This will help you realize a capstone project.
(I believe this is a must.) Join a study group. For me, scribbling different ideas on a whiteboard works best. Being part of my local study group and the intuitive discussions we have were some of my motivations to complete the course. Make sure you participate actively in your study group, or it will be no different from just going through the lectures and being active in the forums.
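To give one concrete (made-up) example of the math-into-code exercise I mean: take a formula like softmax, exp(x_i) / sum_j exp(x_j), and write it from scratch in plain Python before reaching for a library version:

```python
import math

def softmax(xs):
    """Softmax: exp(x_i - max(x)) / sum_j exp(x_j - max(x)).
    Subtracting the max doesn't change the result (it cancels in the
    ratio) but keeps math.exp from overflowing on large inputs."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

Doing this for a handful of formulas from the lectures (softmax, cross-entropy, normalization, etc.) is a quick way to check you can actually translate the math, not just read it.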
There are a bunch of online YouTube-to-mp3 converters out there. I’ve had issues with most of them at various times, so I can’t really recommend one over the others. They’re generally used to grab songs, so some of them don’t handle long sessions well, and some seem to work only intermittently.
For Android: Videoder is pretty good (not on the Play Store though - you have to manually install the apk). It supports offline download of YouTube videos, with or without ripping the audio to mp3.
A lot of great ideas here, not much to add to them.
The one thing I’d like to offer from my experience is that I have never learnt much from running dog classifiers or identifying digits. In fact, the same examples used over and over were at some point pushing me away from all this. Even though I understand their importance as simple and concise examples, they won’t help you take the next step in the learning process.
My way of learning has been a sort of knowledge-transfer process where I take insights from the awesome work Jeremy and the team are doing, transfer them over to my domain and the specific problems that move me, and develop ‘unique’ (to my domain at least) solutions to all that.
It has been the only way for me to not only absorb or memorize new knowledge but really make it mine. Hope it works for you.
This discussion is quite lively… so I am thinking that maybe some of you might be looking for mini projects to get your hands dirty with training…
More information in this post. tl;dr: I think the datasets released by fastai are an underappreciated resource. I’ve started to create a starter pack for imagewoof. I was thinking of keeping the repo private while I work on this, but maybe someone might find a use for it already, perhaps as a refresher before part 2…
Anyhow - whether you use this code or not is immaterial, but do check out the datasets here. They are a super valuable learning resource!
@radek This is definitely a good exercise/mini-project. I had completely missed that something like this could be undertaken. I will take up one dataset at a time and update my GitHub.
I need a project first, and then I go through the lessons. I think my biggest learning has happened while struggling through Kaggle competitions.
It forces me to work on something
I can compare with others
I really see what appears to work and what doesn’t. (I have made several models with a bad validation set that looked to perform really well, but were horrible.)
Working with tabular data, I have probably looked through Rossmann and the Adult Salary set a dozen times, which has gotten me about 80% there. The last 20% is a struggle to get rid of bad habits (not creating a smaller subset while testing architecture, model, and feature engineering) and items I don’t understand/ask about on the forums (categorylist vs floatlist, custom metrics, embeddings).
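The bad-validation-set trap above is worth spelling out. Here’s a small illustrative sketch (my own example, not from the course): with time-ordered data like Rossmann sales, a random split lets the model train on days that come after the validation days, so validation scores look great while real forecasts are horrible. Validating on the most recent slice instead mirrors how the model will actually be used:

```python
import random

def random_split(rows, valid_frac=0.2, seed=42):
    """Random split - leaks future information when rows are
    time-ordered, giving misleadingly good validation scores."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - valid_frac))
    return rows[:cut], rows[cut:]

def time_split(rows, valid_frac=0.2):
    """Time-based split for forecasting: train on the past,
    validate on the most recent slice (like a Kaggle test set)."""
    cut = int(len(rows) * (1 - valid_frac))
    return rows[:cut], rows[cut:]
```

The rows are assumed to be sorted by date; the key property of the time-based split is that every validation row comes after every training row.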
What I hope to give back more this round are some other starter packs for kaggle, similar to planet. For others to work and experiment through.
What a time to be alive!
Just a few hours from the first lecture of Part 2, and I thought I should share my thoughts again here.
I spent the last few weeks as planned:
I don’t know how, but somehow I managed to pull together all of the things that I had planned to do. Maybe it’s because I kept my hopes on the lower end, but that’s okay. I will make sure not to wear myself out this time. During the last Part 2 runs, I would try to run too hard to keep up, then just fall flat on my face and give up on the lectures. I’m determined not to do the same this time.
Managed to complete 3 runs of fastai: 2 thanks to the TWiMLAI meetups and 1 because I had free weekends.
So far I’ve kept up with the lesson discussions from part 1, and I hope to catch up or keep up with the part 2 discussions. Special points if I can keep up in real time; fewer points if I get scared, don’t keep up, and come back later.
I’ve been doing a few source code deep dives during meetups and presenting papers.
I’m still yet to go back to my mini-ideas and start refactoring, mostly because I’m still adding more ideas around them.
A few goals that I’ll settle on for now:
Watch the livestreams live, keep up with the discussions, and not just nod at the notebook cells but actually make sure I understand them reasonably well on the first run. I do plan to come back to the lessons later, so for now I just hope to come away with a better idea rather than feeling alienated.
I’ve promised to do a paper-a-week summary along with one ML hero interview each week. Luckily I teamed up with @lesscomfortable, who has been doing a paper-a-day summary, and that led me to writing 8 paper summaries in the past 6 days. I hope to stick to one paper a week after the course starts, even if not at a pace as intense as now.
Presenting lesson ideas: I’ve enjoyed learning by presenting and blogging, and I’ve really started to realise that doing a mini-talk during the DS India meetups (hosted by @aakashns) or the TWiMLAI meetups has helped me deepen my understanding. I’ve somehow managed to do ~30 hours (my goal for 2019 is 100 hours) of presentations, and I want to keep doing one mini talk or presentation each week on the ideas I learn in part 2.
Capstone project: I’m still yet to fall in love with an idea or a paper that I really want to implement. Again, I’m just a CS undergrad student, so that makes me a boring student with neither coding expertise (which is of course my fault) nor a cool background. So I’m still waiting for my shower thought, but I’ll keep working on mini-ideas and come back to refactoring them in the future.
Another idea I’ve realised I really want to stick to: keeping my setup and DL environment clean. I did some housecleaning today and updated both my machines to the latest fastai and pytorch versions. I set up two environments: a bleeding-edge one where everything is installed from source, and another for reliability (because I mess up, not because source installation is very hard) where I keep everything conda-installed.
Again, we’re all learning to learn, so I’ll be thankful for any suggestions/corrections to my approach.
Also, for anyone who made it through my boring ideas: in case you’d want to tackle some compute-heavy idea and would need some GeForce, please don’t hesitate to reach out to me.
I’ve had success so far with this converter. I’ve converted all the Part 1 lectures and am in the process of converting the Intro to ML lectures. I’d be happy to convert all the old classes to podcast format and host them somewhere, if @jeremy is okay with us converting them to mp3. Of course I won’t do this for Part 2 until it is officially released. Let me know if this would be okay.