@ady_anr About overfitting, Jeremy says in lesson 2 that it’s quite hard to do with the fast.ai libraries. He tried changing a bunch of variables to get a model to overfit just so he could talk about it in class, and he couldn’t manage it. If you overfit, you’ll generally get worse results on your validation set because your model doesn’t generalize. Maybe you can download some more images and test whether your current model classifies them correctly; if it has a high failure rate, I’d get more data and retrain the model.
@init_27 Thanks for your blog; your post ‘How not to do fast.ai’ was one of the inspirations for this thread! Still waiting on the ‘How to do fast.ai’ thread.
Thanks for reading!
I’m trying a few more ideas and plan on sharing them in my second pass through the course (I intend to do three passes and am currently about to complete my first).
Thanks for sharing your approach; it’s a great way of distilling Jeremy’s advice while leaving points for others to pursue.
I did, @init_27, but no replies there yet. Also, I feel the reason I’m getting such high accuracy in the elephant classifier is that both African and Indian elephants are classes in the original ImageNet dataset. I’ll have to try another example tomorrow. Thanks for the suggestion.
Hey @raimanu-ds, I tried downloading a dataset from Kaggle following the steps on your blog, but I don’t actually understand where to place the kaggle.json file on my Google Drive.
By default, when I upload a file it goes to ‘/content/gdrive/My Drive/’, but in your blog you’ve copied it to /root/.kaggle/kaggle.json. How do I do that? This isn’t specified in the Medium article you linked to either. Need help!
To answer your question, after downloading the kaggle.json I moved it into ~/.kaggle folder because this is where the Kaggle API expects the credentials to be located.
I used Google Drive’s UI to create the kaggle folder and move the json file.
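For anyone else stuck on this: on Colab, /root/.kaggle is the same as ~/.kaggle. A minimal sketch of copying the file there from the default ‘My Drive’ upload location mentioned above (the source path is an assumption based on that default; adjust it to wherever your kaggle.json actually lives):

```python
import os
import shutil

# Assumed default upload location after mounting Google Drive in Colab
src = "/content/gdrive/My Drive/kaggle.json"
dst = os.path.expanduser("~/.kaggle/kaggle.json")

# Create the folder the Kaggle API expects, then copy the credentials in
os.makedirs(os.path.dirname(dst), exist_ok=True)
if os.path.exists(src):
    shutil.copy(src, dst)
    os.chmod(dst, 0o600)  # the Kaggle CLI warns if the file is world-readable
```

After this, `kaggle datasets download …` should find the credentials automatically.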
If you use a dataset, it would be very nice of you to cite the creator and thank them for their dataset.
This week, see if you can come up with a problem that you would like to solve that is either multi-label classification or image regression or image segmentation or something like that and see if you can solve that problem. Context: Fast.ai Lesson 3 Homework
In response to “Is there a reason you shouldn’t deliberately make lots of smaller datasets to step up from in tuning, let’s say 64x64 to 128x128 to 256x256?”: Yes you should totally do that, it works great, try it! Context: Lesson 3: 64x64 vs 128x128 vs 256x256
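The schedule being described can be sketched as a plain loop: train at a small size, then rebuild the data at the next size and keep fine-tuning the same model. `make_data` and `fit` below are hypothetical stand-ins for your actual data-loading and training calls, just to show the shape of the loop:

```python
# Progressive resizing sketch: the same model is fine-tuned on
# successively larger images. Placeholders stand in for real
# data loading and training.
def make_data(size):
    return f"dataset at {size}x{size}"   # placeholder: build DataBunch/DataLoader here

def fit(model, data, epochs):
    model["trained_on"].append(data)     # placeholder: run a real training loop here

model = {"trained_on": []}
for size in [64, 128, 256]:              # small images first, then step up
    data = make_data(size)
    fit(model, data, epochs=3)

print(model["trained_on"])
```

The point is that the model object persists across iterations, so each stage starts from the weights learned at the previous, cheaper resolution.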
Be careful if you follow too rigorously what @jeremy says to do: you run the risk of being 2-3 years ahead of everyone else. That is probably the story of his life.
As the lessons go on the “do this” type advice is becoming more specific and specialized. I’ll keep updating this thread as I complete lessons, but the meat is in the advice in lessons 1 and 2. Most of the stuff I post from lesson 3 forward can be found in the (really awesome) lesson notes that have been made available in the forum.
Lesson 4
If you’re doing NLP stuff, make sure you use all of the text you have (including unlabeled validation set) to train your model, because there’s no reason not to. Lesson 4: A little NLP trick
In response to “What are the 10% of cases where you would not use neural nets”. You may as well try both. Try a random forest and try a neural net. Lesson 4: How to know when to use neural nets
The answer to the question “Should I try blah?” is to try blah and see, that’s how you become a good practitioner. Lesson 5: Should I try blah?
If you want to play around, try to create your own nn.Linear class. You could create something called My_Linear, and it will take you, depending on your PyTorch experience, an hour or two. We don’t want any of this to be magic, and you know everything necessary to create it now. These are the things you should be doing for assignments this week: not so much new applications, but trying to write more of these things from scratch and getting them to work. Learn how to debug them and check what’s going in and coming out. Lesson 5 Assignment: Create your own version of nn.Linear
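A minimal sketch of what that assignment might look like (the init scheme here is an assumption; nn.Linear uses its own Kaiming-uniform variant, and the point of the exercise is to work this out yourself):

```python
import math
import torch
import torch.nn as nn

class MyLinear(nn.Module):
    """A from-scratch re-implementation of nn.Linear: y = x @ W.T + b."""
    def __init__(self, in_features, out_features):
        super().__init__()
        # nn.Parameter registers the tensors so the optimizer can find them
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) * math.sqrt(2 / in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        return x @ self.weight.t() + self.bias

layer = MyLinear(10, 3)
out = layer(torch.randn(64, 10))
print(out.shape)  # torch.Size([64, 3])
```

Checking that the output shape matches `nn.Linear(10, 3)` on the same input is a good first debugging step, exactly in the spirit of “check what’s going in and coming out”.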
A great assignment would be to take the Lesson 2 SGD notebook and try to add momentum to it. Or, in the new notebook we have for MNIST, get rid of optim.SGD and write your own update function with momentum. Lesson 5: Another suggested assignment
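The core of that update function fits in a few lines. Here is a sketch on a toy 1-D problem rather than the actual notebook (the hyperparameters are illustrative, not prescribed by the lesson):

```python
# Hand-rolled SGD with momentum, minimizing f(w) = (w - 3)^2.
def grad(w):
    return 2 * (w - 3)          # derivative of (w - 3)^2

w, v = 0.0, 0.0                  # parameter and velocity
lr, beta = 0.1, 0.9              # learning rate and momentum coefficient
for _ in range(200):
    v = beta * v + grad(w)       # velocity: running mixture of past gradients
    w = w - lr * v               # step along the velocity, not the raw gradient
print(w)                         # converges to roughly 3
```

Swapping `grad` for the gradients PyTorch computes (and looping over parameters) turns this into a drop-in replacement for optim.SGD with momentum.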
Not an explicit “do this” but it feels like it fits here. “One of the big opportunities for research is to figure out how to do data augmentation for different domains. Almost nobody is looking at that and to me it is one of the biggest opportunities that could let you decrease data requirements by 5-10x.” Lesson 6: Data augmentation on inputs that aren’t images
Take your time going through the convolution kernel section and the heatmap section of this notebook, running those lines of code and changing them around a bit. The most important thing to remember is shape (the rank and dimensions of a tensor). Try to think “why?”. Try going back to the printout of the summary, the list of the actual layers, and the picture we drew, and think about what’s going on. Lesson 6: Go through the convolution kernel and heatmap notebook
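If the notebook feels opaque, it can help to run a convolution by hand on numbers small enough to check mentally. A self-contained sketch (the edge-detection kernel is a common illustrative choice, not the one from the notebook):

```python
# Apply a 3x3 edge-detection kernel to a tiny 5x5 "image" by hand.
kernel = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

image = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0]]

def conv2d(img, k):
    h, w, kh, kw = len(img), len(img[0]), len(k), len(k[0])
    out = []
    for i in range(h - kh + 1):          # "valid" convolution: output shrinks
        row = []
        for j in range(w - kw + 1):
            row.append(sum(img[i + di][j + dj] * k[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

result = conv2d(image, kernel)
# A (5, 5) input with a (3, 3) kernel gives a (3, 3) output: shape matters.
```

Note how the flat interior of the shape produces 0 (the kernel sums to zero over a constant patch) while the edges light up; tracking why the output shape is 3x3 is exactly the “think about shape” habit the lesson asks for.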
Go back and watch the videos again. There will be bits where you now understand stuff you didn’t before.
Write code and put it on GitHub. It doesn’t matter if it’s great code or not, writing it and sharing it is enough. You’ll get feedback from your peers that will help you improve.
It’s a good time to start reading some of the papers introduced in the course. All the parts that say derivations/theorems/lemmas, feel free to skip; they will add nothing to your understanding of practical deep learning. Read the parts where they talk about why they are solving this problem and the results. Write summaries that would explain the paper to yourself of six months ago.
Perhaps the most important is to get together with others. Learning works a lot better if you have that social experience. Start a book club, a study group, get involved in meetups, and build things. It doesn’t have to be amazing. Build something that will make the world slightly better, or that will be slightly delightful for your two-year-old to see. Just finish something, and then try to make it a bit better. Or get involved with fast.ai and help develop the code and documentation. Check the Dev Projects Index on the forums.
In response to “What would you recommend doing/learning/practicing until the part 2 course starts?”: “Just code. Just code all the time. Look at the shape of your inputs and outputs and make sure you know how to grab a mini-batch. There’s so much material that we’ve covered; if you can get to a point where you can rebuild those notebooks from scratch without cheating too much, you’ll be in the top echelon of practitioners and you’ll be able to do all of these things yourself, and that’s really really rare.” Lesson 7: What to do/learn/practice between now and Part 2. Bonus: This is lesson 7 and the clip starts at t=7777!
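“Know how to grab a mini-batch” can be practiced without any framework at all. A bare-bones sketch (the dummy dataset and batch size are made up for illustration):

```python
import random

# 100 samples, each with 4 features and a binary label
data = [([float(i)] * 4, i % 2) for i in range(100)]

def get_batches(dataset, bs):
    """Shuffle, then yield (inputs, labels) mini-batches of size bs."""
    idxs = list(range(len(dataset)))
    random.shuffle(idxs)
    for start in range(0, len(idxs), bs):
        batch = [dataset[i] for i in idxs[start:start + bs]]
        xs = [x for x, _ in batch]
        ys = [y for _, y in batch]
        yield xs, ys

xb, yb = next(get_batches(data, bs=16))
print(len(xb), len(xb[0]), len(yb))  # 16 4 16
```

The habit being recommended is exactly this: pull one batch out of your pipeline and eyeball its shapes before training anything.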
@MadeUpMasters Suggestions: I learned about a cool summary tool on this platform; I suggest you edit the top post with it to make it look better organised.
If you open with [details="Title displayed"]
And then close the block with [/details], it creates this:
Ex:
[details="Lesson 1"]
Ideas here
[/details]
gives:
Lesson 1
Ideas here
It creates a drop-down menu for the details.
You can even nest them:
[details="Lesson 1"]
foo
[details="Point 1"]
bar
[/details]
[/details]
@init_27, I took the liberty of editing your post to show what’s behind the rendered output by simply adding ``` ``` around it; it’s much easier to see how to do it that way.