This post is from a newcomer's perspective and relates mostly to my personal experience. I joined fastai v2 part 2 and formulated a study plan last time. I failed, mainly for the reasons stated below.
Didn’t take part in v2 part 1 actively. Thus, I was a bit late in realising the most important thing of all… “code, code, code”, as Jeremy says in one of the lectures. I was swayed by the concepts (cool stuff) in deep learning, which made me devote 90% of my time to reading papers, books, and blogs, and only the rest to ‘actually’ writing code.
Didn’t share my work with the community. To be honest, I was just overwhelmed by the work and discussions everyone was presenting in the forums. Until then, I hadn’t been exposed to such a community and was also not really comfortable online. Since there was no capstone project, I kind of went all over the place. (A study group would definitely have helped.)
Ideas that might help:
Start replicating the v3 part1 notebooks on other datasets, if you haven’t already.
Get comfortable with Python and PyTorch.
Make personal notes of the concepts/code snippets you forget most often. (This helps a lot.)
The most interesting part of part 2, for me, is converting math/intuition into code. So try some Pythonic scientific programming exercises if you want to feel more confident.
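As a tiny example of that math-to-code exercise, here is softmax written out from its formula in plain Python (just an illustration of the practice, not something taken from the course notebooks):

```python
import math

def softmax(xs):
    """Softmax of a list of scores: exp(x_i) / sum_j exp(x_j).
    Subtracting the max first keeps the exponentials from overflowing."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs)  # three probabilities that sum to 1, largest for the 3.0 score
```

Writing the stable version (with the max subtracted) rather than the literal formula is exactly the kind of detail these exercises surface.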
I helped some of my friends with their deep learning projects. If you know anyone doing their UG/PG/PhD, reach out to them and volunteer to solve a problem statement. We are going to delve into SOTA deep learning practices, and you’ll be surprised by how much you can contribute during/after this course.
Have a final goal in mind, like your final-year project, upcoming client requirements, or a Kaggle competition. This will help you realise a capstone project.
(I believe this is a must.) Join a study group. For me, scribbling different ideas on a whiteboard works best; being part of my local study group and the intuitive discussions we have were some of my motivations to complete the course. Make sure you participate actively in your study group, otherwise it’s no different from just going through the lectures and being active in the forums.
There are a bunch of online YouTube-to-mp3 converters out there. I’ve had issues with most of them at various times, so I can’t really recommend one over the others. They’re generally used to grab songs, so some of them don’t handle the long sessions well, and some seem to work only intermittently.
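If the online converters keep flaking, converting locally with ffmpeg is one alternative (a sketch, not a recommendation of any particular tool: the filenames are hypothetical and this assumes ffmpeg is installed):

```shell
# -vn drops the video stream; -b:a sets the audio bitrate.
ffmpeg -i lesson-1.mp4 -vn -b:a 128k lesson-1.mp3
```

It handles multi-hour files fine, which is exactly where the song-oriented web converters tend to fall over.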
A lot of great ideas here, not much to add to them.
The one thing I’d like to offer from my experience is that I have never learnt much from running dog classifiers or identifying digits. In fact, seeing the same examples used over and over was at some point pushing me away from all this. Even though I understand their value as simple, concise examples, they won’t help you take the next step in the learning process.
My way of learning has been a sort of knowledge-transfer process: I take insights from the awesome work Jeremy and the team are doing, transfer them over to my domain and the specific problems that move me, and develop ‘unique’ (to my domain, at least) solutions to all that.
It has been the only way for me not only to absorb or memorize new knowledge but to really make it mine. I hope it works for you.
This discussion is quite lively… so I’m thinking some of you might be looking for mini projects to get your hands dirty with training…
More information in this post. tl;dr: I think the datasets provided by fastai are an underappreciated resource. I started to create a starter pack for Imagewoof. I was thinking of keeping the repo private while I work on it, but maybe someone might already find a use for it, perhaps as a refresher before part 2…
Anyhow, whether you use this code or not is immaterial, but do check out the datasets here. They are a super valuable learning resource!
I need a project first and then I go through the lessons. I think my biggest learning has happened while struggling through Kaggle competitions:
It forces me to work on something
I can compare with others
I really see what appears to work and what doesn’t. (I have made several models with a bad validation set that looked to perform really well but were horrible.)
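On that bad-validation-set point, here is a toy sketch (all data made up) of splitting time-ordered records by time rather than at random, so the validation set actually mimics the future data the model will face:

```python
import random

# Toy time-ordered records: (day, value). Purely illustrative.
records = [(day, day * 2) for day in range(100)]

# Random split: validation rows are interleaved with training rows,
# so the model effectively "peeks" at the period it is judged on.
random.seed(0)
shuffled = random.sample(records, len(records))
rand_train, rand_valid = shuffled[:80], shuffled[80:]

# Time-based split: validate on the most recent 20% only.
records.sort(key=lambda r: r[0])
cut = int(len(records) * 0.8)
time_train, time_valid = records[:cut], records[cut:]

# Every validation day comes strictly after every training day.
assert all(t[0] < v[0] for t in time_train for v in time_valid)
```

A model scored on the random split will usually look better than it really is; the time-based split is the honest one for anything with a temporal order (like Rossmann).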
Working with tabular data, I have probably looked through Rossmann and the Adult Salary set a dozen times, which has gotten me about 80% of the way there. The last 20% is a struggle to get rid of bad habits (not creating a smaller subset while testing architecture, models, and feature engineering) and things I don’t understand and ask about on the forums (CategoryList vs FloatList, custom metrics, embeddings).
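On the smaller-subset habit: even something as simple as sampling rows before experimenting keeps each architecture or feature-engineering iteration fast (a minimal sketch with made-up data; in practice the rows would be your dataframe’s indices):

```python
import random

# Hypothetical dataset of 100k row ids; sample 5% so each experiment
# runs in seconds instead of minutes.
rows = list(range(100_000))
random.seed(42)                          # reproducible subset
subset = random.sample(rows, k=5_000)    # sample without replacement
```

Once the pipeline works end-to-end on the subset, rerun it on the full data; the bugs show up just as well on 5% of the rows.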
What I hope to give back more this round are some other starter packs for Kaggle, similar to Planet, for others to work and experiment through.
What a time to be alive!
Just a few hours from the first lecture of part 2, and I thought I should share my thoughts again here.
I spent the last few weeks as planned:
I don’t know how, but somehow I managed to pull together all the things I had planned to do. Maybe it’s because I kept my hopes on the lower end, but that’s okay; I will make sure not to wear myself out this time. During the last part 2 runs, I would try too hard to keep up, then fall flat on my face and give up on the lectures. I’m determined not to do the same this time.
Managed to complete 3 runs of fastai: 2 thanks to the twimlai meetups and 1 because I had free weekends.
So far I’ve kept up with the lesson discussions from part 1, and I hope to catch up or keep up with the part 2 discussions. Special points if I can keep up in real time; fewer points if I get scared, don’t keep up, and come back later.
I’ve been doing a few source code deep dives during meetups and presenting papers.
I’ve still yet to go back to the mini-ideas and start refactoring, mostly because I’m still adding more ideas around them.
A few goals that I’ll settle on for now:
Watch the livestreams live, keep up with the discussions, and not just nod at the notebook cells but actually ensure I understand them, at least okay-okay for the first run. I do plan to come back to the lessons later, so for now I just hope to have a better grasp than simply feeling alienated.
I’ve promised to do a paper-a-week summary along with one ML hero interview each week. Luckily I teamed up with @lesscomfortable, who has been doing a paper-a-day summary, and it led me to write 8 paper summaries in the past 6 days. I hope to stick to one paper a week after the course starts, even if not at a pace as intense as now.
Presenting lesson ideas: I’ve enjoyed learning via presenting and blogging, and I’ve really started to realise that doing a mini-talk during the DS India meetups (hosted by @aakashns) or during the TWiMLAI meetups has really helped me deepen my understanding. I’ve somehow managed to do ~30 hours of presentations (my goal for 2019 is 100 hours), and I want to keep doing one mini talk or presentation each week on the ideas I learn in part 2.
Capstone project: I’ve still yet to fall in love with an idea or a paper that I really want to implement. Again, I’m just a CS undergrad, which makes me a boring student with neither coding expertise (which is of course my fault) nor a cool background. So I’m still waiting for my shower thought, but I’ll keep working on mini-ideas and come back to refactoring them in the future.
Another idea I’ve realised I really want to stick to: ensuring that my setup and DL environment stay clean. I did some housecleaning today and updated both my machines to the latest fastai and PyTorch versions. I set up 2 envs: a bleeding-edge one with everything installed from source, and another for reliability (because I mess up, not because source installation is very hard) where I keep everything conda-installed.
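For what it’s worth, a two-env setup like that might look roughly like this (a sketch based on the fastai v1 install instructions of the time; env names are made up, and channels/versions are assumptions to adjust for your machine):

```shell
# Reliable env: everything installed and pinned via conda.
conda create -n fastai-stable python=3.7
conda activate fastai-stable
conda install -c pytorch -c fastai fastai

# Bleeding-edge env: the library installed straight from source.
conda create -n fastai-dev python=3.7
conda activate fastai-dev
pip install git+https://github.com/fastai/fastai.git
```

Keeping them separate means a broken source install never takes down the env you actually rely on.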
Again, we’re all learning to learn, so I’ll be thankful for any suggestions/corrections to my approach.
Also, for anyone who made it through my boring ideas: in case you’d want to tackle some compute-heavy idea and need some GeForce, please don’t hesitate to reach out to me.
I’ve had success so far with this converter. I’ve converted all the Part 1 lectures and am in the process of converting the Intro to ML lectures. I’d be happy to convert all the old classes to podcasts and host them somewhere, if @jeremy is okay with us converting them to mp3. Of course, I won’t do this for Part 2 until it is officially released. Let me know if this would be okay.
Hey, sorry I slacked on this; the files exceeded the size limit on GitHub, and I was waiting until I’d set up a personal website to host them. If you know a good place to share them freely, please let me know and I’ll upload them.
Hey guys, sorry for the delay on this. I’ve gone the Google Drive + GitHub route. The Part 1 2019 audio is up. I’m going to add Intro to ML and Part 2 2018, and I’ll add Part 2 2019 upon request (or accept PRs from others who do it).
So far, having listened to a few of the Part 1 lessons in audio-only mode, I have to say I’m not a huge fan of listening podcast style. There are definitely certain parts that work really well as audio (general advice, specifics like heuristics for tuning learning rates, etc.), but for the most part it doesn’t seem like a really effective use of time. The argument from most people would be that you can do it while doing other stuff, but I personally prefer going really hard at learning for a fixed period daily, then letting it go, doing stuff away from the computer, and coming back refreshed. To each their own, so if this method helps you, then the repo will be there!
Something that has helped me with fastai has been to re-implement each notebook and take the course week by week instead of rushing to complete it. I treat fastai as a marathon, not a sprint, and make sure that I finish, and finish strong.
I write notes to myself, research the things that are hard to understand, and spend a lot of time writing code and digging into the fastai source code.
Now that part 2 is out, my focus will be this approach of doing one lesson per week, or less, but doing it really well. Also, doing the homework and following the public MOOC as though I were in class really helps me. fastai is overwhelming in terms of knowledge; there is so much to learn that if we hopped from one lesson to the next without first spending time on each lecture, it would be really hard to carry that knowledge forward.
Yes and no. It was built on an older version of the library, so some things may not port over well, but what Jeremy teaches there is still relevant today. Eventually we’ll get some more narrowly focused courses (like the NLP one happening now and a GAN one in the works), but for now it is an excellent course for understanding the advanced topics.