Things Jeremy says to do

I’m in :slight_smile:

There’s a typo in your last bullet point. I think you meant classes or categories instead of classifiers.

Thanks a lot for this post @MadeUpMasters. I just started the course yesterday and am not sure whether the approach I'm following is the right one. How about we keep updating this thread after every lesson and discuss what approach we followed, what we would do differently, and how it helped?
Currently I've finished the Lesson 1 video and spent an hour or so running the notebook. Next: collect a few images to train a classifier and get familiar with the fastai library's syntax by going through the docs.

2 Likes

What dataset are you guys planning to train your models on for the Lesson 1 assignment?

1 Like

I just went through Lesson 1. There was no mention of an assignment. Can you please point me to where the assignments are listed?

1 Like

Not exactly assignments, @yuvaraj. I meant training our own models on self-curated datasets, which Jeremy advised us to do.

@raimanu-ds Changed classifiers to categories. Thanks for pointing this out!

@ady_anr Sounds great, I’m in. I took the following steps after Lesson 1:

  1. Ran the lesson 1 notebook step by step
  2. Thought about interesting but simple datasets I could run notebook 1 on (decided on fruits)
  3. Searched for how to get my own dataset and came across the post Tips for building large image datasets. It's an awesome post, but it didn't work for me on Paperspace Gradient, so I wasted a lot of time trying to install things to make it work.
  4. Discovered lesson2-download.ipynb in the course notebooks; this was a game-changer and what I'd recommend everyone use to build their first dataset (a rough sketch of that workflow is below this list).
  5. Chose a simple problem. I started with alligator vs crocodile and couldn't get great results (22% error rate), so I took a step back and chose objects that are easier to distinguish: apple vs papaya. If that works, I'll go a level harder (either back to alligator/crocodile, or fruits with more classes); if not, I'll go a level easier (e.g. apple vs bear, haha, but let's hope it doesn't come to that) or post on the forum for help.
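
For anyone curious, step 4 boils down to something roughly like this. It's only a sketch of the kind of code in lesson2-download.ipynb, adapted to my apple vs papaya idea and assuming fastai v1; the folder names, URL files, and image counts below are my own placeholders:

```python
from fastai.vision import *
from fastai.metrics import error_rate

path = Path('data/fruits')          # placeholder folder
classes = ['apple', 'papaya']

for c in classes:
    dest = path/c
    dest.mkdir(parents=True, exist_ok=True)
    # each urls_<class>.csv is a file of image URLs collected from an image search
    download_images(path/f'urls_{c}.csv', dest, max_pics=200)
    # remove any downloads that can't be opened as images
    verify_images(dest, delete=True, max_size=500)

# build a DataBunch from the class folders, holding out 20% for validation
data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)

learn = create_cnn(data, models.resnet34, metrics=error_rate)  # cnn_learner in newer fastai v1
learn.fit_one_cycle(4)
```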

@yuvaraj I'd suggest you set up a GPU using the instructions at course.fast.ai -> server-setup, then do steps 1, 2 and 4 above.

7 Likes

@MadeUpMasters sounds to me like you’ve done plenty for lesson 1 - I’d suggest moving to lesson 2 at this point. You should generally plan to go through the lessons 2-3 times, going a bit deeper each time, since stuff you learn later will help clarify things earlier in the course.

17 Likes

That's great. I'm planning on doing a pulses classifier. I wanted a task that is generally hard for a human, but I don't know how well that will work out, because recognizing pulses doesn't require the higher-level pretrained features of the ResNet. Will let you know my progress by tomorrow.

Hey guys. I went through the lesson notebook, after which I created a dataset of elephant pictures organised into two folders: Indian elephant and African elephant.

I chose this topic because, even for a person who is used to seeing elephants, differentiating an African one from an Indian one is a pretty tough task.

I trained on a total of 20 images, and the accuracy I got was 100% within 6 epochs.
I think the model is overfitting. How do I check this, and if it is, how do I solve the problem?

I had set valid_pct to 0.2, so my validation set contains 4 images. Working on a bigger dataset currently.

Please do give your suggestions and feedback. :grinning:

1 Like

Just out of curiosity, the difference between those 2 types of elephants :elephant: is the size of their ears, right?

Sorry I can’t answer your question about overfitting as I am not there yet :expressionless:

Yes @raimanu-ds, the ears, and the trunk also looks a bit different.

That’s great @MadeUpMasters :+1:t3:

As for me, I was surprised at how relatively easy it was to create our own image datasets using the tips in the post you linked to. However, I struggled with loading the data into the DataBunch factory methods.

Eventually, I figured it out and moved on to create a classifier that could recognize 40 characters from the TV show ‘The Simpsons’. I explained the various steps I took to implement it in Google Colab here.

It's been really interesting so far and I am quite satisfied with the results (even without fine-tuning the model). I started looking closer at the results and noticed, for instance, that some images were mislabeled. Since more about this topic is explained in Lesson 2, I think I will move on to that lecture.

Let’s keep in touch guys! :grin:

https://raimanu-ds.github.io/tutorial/can-ai-guess-which-the-simpsons-character/

4 Likes

That's a great blog post you wrote @raimanu-ds. Thanks for the detailed explanation of how to download Kaggle datasets onto Colab.

:blush: :blush:

@ady_anr You may want to share your project in the Share your work thread; you might be able to get more feedback there.

Regards.

1 Like

Anybody with thoughts on digging into the docs? I've noticed a lot of the classes and methods are new, like ImageItemList vs. ImageDataBunch. I'm only through the first 2 lessons so far. Does it make sense to keep working through the classes via the docs, or does he go into them a bit more later?
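
For example, these two seem to build the same thing, if I'm reading the docs right (the path, size, and split below are just my guesses, and some names differ between fastai versions):

```python
from fastai.vision import *

path = Path('data/my_images')   # hypothetical folder with one subfolder per class

# high-level factory method from lessons 1 and 2
data_a = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                    ds_tfms=get_transforms(), size=224)

# lower-level data block API; newer fastai versions rename ImageItemList to ImageList
# and random_split_by_pct to split_by_rand_pct
data_b = (ImageItemList.from_folder(path)
          .random_split_by_pct(0.2)
          .label_from_folder()
          .transform(get_transforms(), size=224)
          .databunch())
```

So is the factory method just a convenience wrapper around the data block pipeline?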

I'm not sure of the best way to structure the thread, but for now I've removed "lesson 1" from the thread title and I'll make a new post for each lesson, so that if people click "summarize this thread", the posts people find useful will float to the top. Let me know if there's a better way to structure it. Now, here's what Jeremy said to do in Lesson 2.

Lesson 2:

  1. If forum posts are overwhelming, click “summarize this topic” at the bottom of the first post.

  2. Please follow the official server install/setup instructions; they work and are easy.

  3. It’s okay to feel intimidated, there’s a lot, but just pick one piece and dig into it. Try to push a piece of code, or learn a concept like regular expressions, or create a classifier, or whatever. Context: Lesson 2: It’s okay to feel intimidated

  4. If you’re stuck, keep going. See image below! Context: Lesson 2: If you’re stuck, keep going

  5. If you're not sure which learning rate is best from the plot, try both and see (a rough sketch is after this list).

  6. When you put a model into production, you probably want to use CPU for inference, except at massive scale. Context: Lesson 2: Putting Model into Production

  7. Most organizations spend too much time gathering data. Get a small amount first, see how it goes.

  8. If you think you’re not a math person, check out Rachel’s talk: There’s no such thing as “not a math person”. My own input: only 6 minutes, everyone should watch it!
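
My own input on point 5: in code it looks roughly like this, assuming `learn` is a CNN learner built as in the Lesson 1 notebook (the two candidate learning rates are just examples):

```python
learn.save('stage-1')        # checkpoint so both trials start from the same weights

learn.lr_find()              # sweep learning rates
learn.recorder.plot()        # plot loss vs. learning rate to pick candidates

# not sure whether 1e-4 or 3e-4 looks better on the plot? Try both and compare.
learn.load('stage-1')
learn.fit_one_cycle(2, max_lr=1e-4)

learn.load('stage-1')
learn.fit_one_cycle(2, max_lr=3e-4)  # keep whichever gives the lower validation error
```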

2 Likes

Jeremy, thanks for the reply and advice!

@ady_anr About overfitting, Jeremy says in Lesson 2 that it's quite hard to do with the fastai library. He tried changing a bunch of settings to get a model to overfit just so he could talk about it in class, and he couldn't manage it. If you overfit, you'll generally get worse results on your validation set because your model doesn't generalize. Maybe you can download some more images and test whether your current model classifies them correctly; if it has a high failure rate, I'd get more data and retrain the model.
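
One quick way to sanity-check it, assuming fastai v1 and that `learn` is your trained learner, is to compare the training and validation losses and look at what the model gets wrong:

```python
learn.recorder.plot_losses()    # training vs. validation loss over the run

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()  # how often the two elephant classes get confused
interp.plot_top_losses(4)       # the validation images the model was most wrong about
```

With only 4 validation images, though, these numbers won't tell you much, so more data is probably the first step anyway.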

@init_27 Thanks for your blog. This post How not to do fast.ai was one of the inspirations for this thread! Still waiting on the ‘how to do fast ai’ thread :grinning:

3 Likes

Thanks for reading! :slight_smile:
I'm trying a few more ideas and plan on sharing them in my second pass through the course (I intend to do three; I'm currently about to complete my first pass).

Thanks for sharing your approach; it's a great way of distilling Jeremy's advice as well as leaving points for others to pursue.

Regards,
Sanyam.

1 Like

I did, @init_27, but no replies there yet. Also, I feel the reason I'm getting such high accuracy with the elephant classifier is that both African and Indian elephants are classes in the original ImageNet dataset. I'll have to try another example tomorrow. Thanks for the suggestion.

1 Like