Things Jeremy says to do

(Aditya Anantharaman) #21

I did @init_27, but no replies there yet. Also, I feel the reason I'm getting such high accuracy in the elephant classifier is that both African and Indian elephants are classes in the original ImageNet dataset. I'll have to try it with another example tomorrow. Thanks for the suggestion.

1 Like

(Aditya Anantharaman) #22

Hey @raimanu-ds, I tried downloading a dataset from Kaggle following the steps on your blog, but I don't actually understand where to place the kaggle.json file on my Google Drive.
By default, when I upload a file it goes to '/content/gdrive/My Drive/', but in your blog you've copied it to /root/.kaggle/kaggle.json. How do I do that? It isn't specified in the Medium article you linked to either. Need help!

EDIT


Here’s a great resource to solve the problem.

2 Likes

#23

Hi @ady_anr,

I see you got it to work, well done :+1:

To answer your question: after downloading kaggle.json, I moved it into the ~/.kaggle folder because that is where the Kaggle API expects the credentials to be located.

I used Google Drive’s UI to create the kaggle folder and move the json file.
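For anyone on Colab who prefers to script the move rather than use the Drive UI, here is a minimal sketch. The default source path follows the '/content/gdrive/My Drive/' convention mentioned above; the helper name `install_kaggle_credentials` is made up for illustration:

```python
import os
import shutil

def install_kaggle_credentials(src="/content/gdrive/My Drive/kaggle.json",
                               dest_dir=None):
    """Copy kaggle.json to ~/.kaggle, where the Kaggle API looks for it."""
    dest_dir = dest_dir or os.path.expanduser("~/.kaggle")
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, "kaggle.json")
    shutil.copy(src, dest)
    os.chmod(dest, 0o600)  # the Kaggle CLI warns if the key file is world-readable
    return dest
```

After running this, `kaggle datasets download ...` should find the credentials without any extra configuration.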

1 Like

(Robert Bracco) #24

Lesson 3

  1. If you use a dataset, it would be very nice of you to cite the creator and thank them for their dataset.
  2. This week, see if you can come up with a problem that you would like to solve that is multi-label classification, image regression, image segmentation, or something similar, and see if you can solve it. Context: Fast.ai Lesson 3 Homework
  3. Always use the same stats that the model was trained with. Context: Lesson 3: Normalized data and ImageNet
  4. In response to “Is there a reason you shouldn’t deliberately make lots of smaller datasets to step up from in tuning, let’s say 64x64 to 128x128 to 256x256?”: Yes you should totally do that, it works great, try it! Context: Lesson 3: 64x64 vs 128x128 vs 256x256
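On point 3, the arithmetic behind "use the same stats the model was trained with" is just a per-channel shift and scale by the training set's statistics (in fastai v1 this is what normalizing with `imagenet_stats` does). A dependency-free sketch; the `normalize_pixel` helper is made up for illustration:

```python
# Per-channel ImageNet statistics (RGB order), the stats pretrained models expect
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Normalize one RGB pixel (channel values scaled to [0, 1])."""
    return [(v - m) / s for v, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD)]
```

Feeding a pretrained model data normalized with different stats quietly degrades accuracy, which is why the lesson calls this out.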
8 Likes

#25

Hey, great idea ;] Could you edit your first post and add the tips from the later lessons? It would make them much easier to find. Thanks

1 Like

(Robert Bracco) #26

Thanks for the input. Done.

1 Like

(Alexandre Cadrin-Chênevert) #27

Be careful: if you follow too rigorously what @jeremy says to do, you run the risk of being 2-3 years ahead of everyone else. That is probably the story of his life.

7 Likes

(Robert Bracco) #28

As the lessons go on the “do this” type advice is becoming more specific and specialized. I’ll keep updating this thread as I complete lessons, but the meat is in the advice in lessons 1 and 2. Most of the stuff I post from lesson 3 forward can be found in the (really awesome) lesson notes that have been made available in the forum.

Lesson 4

  1. If you’re doing NLP stuff, make sure you use all of the text you have (including the unlabeled validation set) to train your model, because there’s no reason not to. Lesson 4: A little NLP trick

  2. In response to “What are the 10% of cases where you would not use neural nets?”: You may as well try both. Try a random forest and try a neural net. Lesson 4: How to know when to use neural nets

  3. Use these terms (parameters, layers, activations…etc) and use them accurately. Lesson 4: Important vocabulary for talking about ML
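The trick in point 1 works because a language model's targets are just the next tokens of the text itself, so unlabeled splits are fair game for it even though the classifier must never train on them. A toy sketch (`build_lm_corpus` is a hypothetical helper, not a fastai function):

```python
def build_lm_corpus(labeled_texts, unlabeled_texts):
    """Pool labeled and unlabeled text into one corpus for language-model
    fine-tuning: the LM predicts the next token of the text itself, so it
    never needs the classification labels."""
    return list(labeled_texts) + list(unlabeled_texts)
```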

4 Likes

(Robert Bracco) #29

Lesson 5

  1. The answer to the question “Should I try blah?” is to try blah and see, that’s how you become a good practitioner. Lesson 5: Should I try blah?

  2. If you want to play around, try to create your own nn.Linear class. You could create something called My_Linear and it will take you, depending on your PyTorch experience, an hour or two. We don’t want any of this to be magic, and you know everything necessary to create this now. These are the things you should be doing for assignments this week: not so much new applications, but trying to write more of these things from scratch and getting them to work. Learn how to debug them and check them to see what’s going in and coming out. Lesson 5 Assignment: Create your own version of nn.Linear

  3. A great assignment would be to take the Lesson 2 SGD notebook and try to add momentum to it. Or take the new notebook we have for MNIST, get rid of optim.SGD, and write your own update function with momentum. Lesson 5: Another suggested assignment
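For point 2, here is roughly what the shape of such an exercise looks like. This is a dependency-free sketch of the forward pass only; the real assignment would subclass nn.Module and store the weights as nn.Parameter so autograd can train them:

```python
import math
import random

class MyLinear:
    """A from-scratch stand-in for nn.Linear: y = x @ W.T + b."""

    def __init__(self, in_features, out_features):
        # nn.Linear initializes uniformly in [-1/sqrt(in), 1/sqrt(in)]
        bound = 1 / math.sqrt(in_features)
        self.weight = [[random.uniform(-bound, bound) for _ in range(in_features)]
                       for _ in range(out_features)]
        self.bias = [random.uniform(-bound, bound) for _ in range(out_features)]

    def __call__(self, x):
        # x: list of in_features numbers -> list of out_features numbers
        return [sum(w * xi for w, xi in zip(row, x)) + b
                for row, b in zip(self.weight, self.bias)]
```

Checking what goes in and comes out, as the lesson suggests, is mostly a matter of verifying these shapes: in_features numbers in, out_features numbers out.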
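And for point 3, the momentum update itself is only a couple of lines. A sketch of one common formulation (v = mom * v + grad, then p -= lr * v); the function name and flat-list representation are illustrative:

```python
def sgd_momentum_step(params, grads, velocities, lr=0.1, mom=0.9):
    """One in-place SGD-with-momentum update over flat lists of floats."""
    for i, g in enumerate(grads):
        velocities[i] = mom * velocities[i] + g  # exponentially weighted gradient
        params[i] -= lr * velocities[i]
    return params, velocities
```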

3 Likes

(Robert Bracco) #30

Lesson 6

  1. Not an explicit “do this” but it feels like it fits here. “One of the big opportunities for research is to figure out how to do data augmentation for different domains. Almost nobody is looking at that and to me it is one of the biggest opportunities that could let you decrease data requirements by 5-10x.” Lesson 6: Data augmentation on inputs that aren’t images

  2. Take your time going through the convolution kernel section and the heatmap section of this notebook, running those lines of code and changing them around a bit. The most important thing to remember is shape (the rank and dimensions of a tensor). Try to think “why?”. Try going back to the printout of the summary, the list of the actual layers, and the picture we drew, and think about what’s going on. Lesson 6: Go through the convolution kernel and heatmap notebook
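A bare-bones version of the shape bookkeeping from point 2: a "valid" 2-D cross-correlation over nested lists, where the output shrinks by kernel_size - 1 in each dimension. This is a toy sketch, not the fastai/PyTorch implementation:

```python
def conv2d_valid(img, kernel):
    """'Valid' cross-correlation of 2-D lists: output is (H-kh+1) x (W-kw+1)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]
```

Tracking how H and W change as kernels (and strides and padding) are applied is exactly the "think about shape" habit the lesson recommends.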

2 Likes

(SHARAN VISHNU S) #31

Hey! It would be great if you could tell us how to upload our own dataset using the DataBunch factory methods.

0 Likes

(Robert Bracco) #32

Lesson 7

  1. Don’t let this lesson intimidate you. It’s meant to be intense in order to give you ideas to keep you busy before part two comes out.

Points 1-5 below come from a great speech towards the end of the lesson. I’d highly recommend revisiting it here: Lesson 7: What to do once you’ve completed Part 1

  1. Go back and watch the videos again. There will be bits where you now understand stuff you didn’t before.

  2. Write code and put it on GitHub. It doesn’t matter if it’s great code or not, writing it and sharing it is enough. You’ll get feedback from your peers that will help you improve.

  3. It’s a good time to start reading some of the papers introduced in the course. Feel free to skip all the parts that say derivations/theorems/lemmas; they will add nothing to your understanding of practical deep learning. Read the parts where they talk about why they are solving the problem and the results. Write summaries that would explain the paper to yourself of six months ago.

  4. Perhaps the most important: get together with others. Learning works a lot better if you have that social experience. Start a book club, join a study group, get involved in meetups, and build things. It doesn’t have to be amazing. Build something that will make the world slightly better, or that will be slightly delightful for your two-year-old to see. Just finish something, and then try to make it a bit better. Or get involved with fast.ai and help develop the code and documentation. Check the Dev Projects Index on the forums.

  5. In response to “What would you recommend doing/learning/practicing until the part 2 course starts?”: “Just code. Just code all the time. Look at the shape of your inputs and outputs and make sure you know how to grab a mini-batch. There’s so much material that we’ve covered; if you can get to a point where you can rebuild those notebooks from scratch without cheating too much, you’ll be in the top echelon of practitioners and you’ll be able to do all of these things yourself, and that’s really, really rare.” Lesson 7: What to do/learn/practice between now and Part 2 Bonus: This is lesson 7 and the clip starts at t=7777!

4 Likes

(Sanyam Bhutani) #33

@MadeUpMasters A suggestion: I learned about this cool summary tool on this platform. You could edit the top post with it to make it look better organised.

If you write [details="Title displayed"] before the content and then close it with [/details] (note the leading slash), you get a collapsible section. For example:

[details="Lesson 1"]
Ideas here
[/details]

gives:

Lesson 1

Ideas here

It creates a drop-down menu for the details.

You can even nest it:

[details="Lesson 1"]
foo
[details="Point 1"]
bar
[/details]
[/details]

gives:

Lesson 1

foo

Point 1

bar

4 Likes

(Robert Bracco) #34

Done! Thanks for the suggestion. One weird thing: I can’t get the titles to be bold. Not a big deal, but if you have a quick fix let me know.

0 Likes

(深度碎片) #35

Thanks for sharing, this is super wise! I am translating your post into Chinese here

1 Like

(深度碎片) #36
main point
sub point

Thanks! This sub-nesting is very cool!

2 Likes

(Stas Bekman) #37

@init_27, I took the liberty of editing your post to show what’s behind the rendered output by simply adding ``` fences around it; it’s much easier to see how to do it that way.

I’m not sure the nested one works well, as it doesn’t indent the nesting, so it’s just confusing. It doesn’t look like it’s meant to be nested: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/details

1 Like

(Sanyam Bhutani) #38

Many thanks!

I think it might be a personal preference: keeping everything else collapsed allows me to focus on just the one open tab, hence the suggestion.

I didn’t realise it doesn’t indent/display the nesting, thanks for pointing that out.

1 Like

#39

Hi, much appreciated Robert, and following your advice:

  • A silly question from Lesson 1: I can’t seem to get the “create new cell” and “reset kernel” shortcuts to work?

Thanks heaps ; )
dgrl

0 Likes

(Robert Bracco) #40

Hey, not a silly question at all. What have you tried and what happened?

There are a number of ways to create a new cell. You can press ‘a’ or ‘b’ to create a new cell above or below your current one (while in command mode, so the cell is outlined in blue and typing doesn’t insert text). Also, hitting shift-enter in the last cell of the notebook (in either command or edit mode) will execute the code and create a new cell below.

Restarting the kernel should be pressing ‘0’ twice (also in command mode). A warning box will pop up.

If you already tried those and it didn’t work then please post here with details. Cheers.

0 Likes