Share your work here ✅

:+1: @mrfabulous1

1 Like

I did a similar thing on softball and baseball and got an accuracy of around 92%

1 Like

I created a model for differentiating between logos of different automobile brands. I had 600 training images and 130 validation images across 8 different brands. ResNet34 reached an accuracy of 96.3% with just a few iterations. Then I tried ResNet50, which surprisingly, even after 100 iterations, has an accuracy of just 70%. Any clue?
[image: training results]
Training after unfreezing and after 100 iterations:
[image: training results]

Hi. Just came across your post when looking around this super informative thread. Well done on the awesome results! Do you still have the dataset for this challenge? Is it available to share for further research? Thanks.

Hello everyone! I started to solidify my learning by studying papers in depth and writing posts on them. Here is my first attempt:

Please leave a clap if you like it, thanks :smile:

3 Likes

Hi everyone, after lesson 2 I created my own image classifier to differentiate between 10 different medications: Allopurinol, Atenolol, Ciprofloxacin, Levothyroxine, Metformin, Olanzapine, Omeprazole, Oxybutynin, Prednisone, and Rosuvastatin. The specific strength of each med is noted in the notebook.

As a former pharmacist turned software developer, I thought it would be interesting to see how a ML model would perform.

I sourced images from the US National Library of Medicine’s Pillbox and Google Images. As you can tell, Google Images included quite a few junk images.

After cleaning up the data and experimenting with epochs and learning rates, I trained resnet34 on the final dataset.

from fastai.vision import *  # fastai v1 imports (cnn_learner, models, error_rate)

learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(5, max_lr=slice(1e-3, 1e-2))  # 5 epochs with discriminative learning rates
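(For choosing the max_lr slice, the standard fastai v1 learning rate finder looks like this; a generic sketch, not necessarily the exact steps used here:)

learn.lr_find()        # short mock training run across a range of learning rates
learn.recorder.plot()  # pick max_lr from the steepest downward part of the curve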

The model had an accuracy of 63%. Here’s the notebook on GitHub.

Steps for improvement include getting more images and discarding more junk images.

7 Likes

I built a simple language model based on WhatsApp chat data for part 1. I decided to go back, make some improvements, and write a Medium post about it all.

I’ve now completed part 2 so I wanted to see if I could go back and use what I’ve learned to add a custom rule to the tokenizer, as well as clearly explain what’s going on with the language_model_learner. With a bit of digging through the documentation and source code I was able to do it!
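To give a flavour of it, here's a minimal sketch of a custom pre-rule (fastai v1 API; the replace_timestamps rule and the xxtime token are illustrative, not the exact rule from my post):

import re
from fastai.text import *

# Hypothetical pre-rule: collapse WhatsApp-style timestamps into a special token
def replace_timestamps(t):
    return re.sub(r'\d{1,2}:\d{2}', ' xxtime ', t)

# Run the custom rule before the standard fastai pre-rules
tokenizer = Tokenizer(pre_rules=[replace_timestamps] + defaults.text_pre_rules)
processors = [TokenizeProcessor(tokenizer=tokenizer), NumericalizeProcessor()]

# language_model_learner then wraps a pretrained AWD_LSTM around the data
# (data_lm here is a TextLMDataBunch built with the processors above)
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)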

1 Like

Here’s a writeup I did for a competition I participated in on the Zindi competitive data science platform. The objective of the contest was to use remote sensing imagery from different time points to classify what types of crops were growing in fields with given boundaries. I used a U-Net-style approach, and I found a really nice library called eo-learn to help with the processing pipeline for the remote sensing data.
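For anyone curious about the model side, the core setup in fastai v1 looks roughly like this (a generic sketch, not my exact pipeline; the eo-learn preprocessing and the data bunch of image/mask pairs are omitted):

from fastai.vision import *

# data: an ImageDataBunch of (image, mask) pairs prepared for segmentation
learn = unet_learner(data, models.resnet34, metrics=dice)
learn.fit_one_cycle(10)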

6 Likes

Cricket vs Tennis ball

This is my second pass (I had partially done V1 last year). Being an Indian, I had to do something with cricket. While we use both types of balls to play cricket, I thought of asking the machine to learn the differences (which we ignore during a cricket session :slight_smile: )

A big shout out to @melonkernel Christoffer Björkskog and his post on cleaning up the wrong images and generating a URL list via a bookmarklet.

Did this using Google Colab.
So… I was able to achieve 98% accuracy even before unfreezing the layers… so just sharing it here.
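In case it helps anyone, the data collection step looked roughly like this (fastai v1; the file and folder names are illustrative):

from fastai.vision import *

# urls_cricket.csv / urls_tennis.csv were generated with the bookmarklet above
download_images('urls_cricket.csv', 'data/cricket', max_pics=200)
download_images('urls_tennis.csv', 'data/tennis', max_pics=200)

# Drop any files that fail to open as images
verify_images('data/cricket', delete=True)
verify_images('data/tennis', delete=True)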

PS: one of the images looks as stupid as my silly experiment… hopefully I will be able to find something really useful eventually :slight_smile:
[image]

1 Like

Hi mkulkhanna
Great job.
You presented a complex subject in an easy-to-digest style.
Cheers mrfabulous1 :smiley::smiley:

1 Like

Hi, I entered a Kaggle competition for number recognition, and I am currently ranked 11th (top 3%). This is my first Kaggle competition.

My profile: https://www.kaggle.com/rincewind007

5 Likes

For those interested, an energy Kaggle competition opened up this week, and I have fastai 2.0 starter code available here.

5 Likes

Hi

I tried a simple CNN on Kaggle, https://www.kaggle.com/mgtrinadh/basic-fastai-model, to get hands-on experience and understand how to improve accuracy.
I'm on lesson 5 and am excited to try out a Kaggle problem!

Best
Gopi

Just finished the second lecture. I made a model for classifying East Asian traditional dresses: Kimono, Hanbok, and Qipao. I was able to achieve 95% accuracy. On retraining the model several times, I kept getting different learning rate curves. I am not sure why that happens, but I will ask about it on the forums after this post.

The model could classify a Qipao correctly. However, it sometimes classified a Hanbok as a Kimono.

Web link to the deployed app: Traditional Dress Classifier

Detailed Medium blog post: https://medium.com/@varundhingra004/classifying-east-asian-traditional-dresses-using-deep-learning-and-computer-vision-59dc71f97d77

3 Likes

Hi to all!

I used the tutorial on how to download pictures from Google Images to assemble a dataset and train a so-called “Baby Monitor”. Its purpose is to take a picture of a baby in the crib and predict whether the baby is sleeping, sitting, or standing.

As you can see in the picture, I got a 13% error rate, which is pretty good considering that I only used the defaults (the model was a pretrained ResNet34 architecture) and my training set is around 170 images across three categories.
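For reference, the whole thing was essentially the default lesson 2 workflow (a sketch from memory; the folder name and image size are assumptions):

from fastai.vision import *

# Folder layout assumed: data/{sleeping,sitting,standing}/...
data = ImageDataBunch.from_folder('data', train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)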

I have two questions:

  • In my experiments, the validation loss was lower than the training loss most of the time. Why does this happen, and is it bad?
  • Is it really that easy to get such good results?! :slight_smile:

Nicely done! Also a nice Medium article. I have never tried doing that; let me try it. I am also experimenting with lessons 2 & 3 currently.

A few queries:

  1. Which tool did you use to collect images? How large was the final training/validation dataset? In my experience, out of 500 Google Images URLs for each class I only got 50-70 valid images; the rest were deleted as invalid.
  2. I am trying to make my model say “I don’t know” when someone gives it an irrelevant image (a rough sketch of the idea is below). Would you want to collaborate? (As you can see, your deployment could also benefit from this, so it can tell when something is not even a dress :slight_smile:)

Hope I am not too nosy!
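Here's the rough idea behind the “I don't know” output, by the way (fastai v1; the 0.8 threshold is just an assumption to tune on validation data):

from fastai.vision import *

# Reject low-confidence predictions instead of forcing a class
img = open_image('some_image.jpg')       # any test image
pred_class, pred_idx, probs = learn.predict(img)

THRESHOLD = 0.8                          # assumed cutoff; tune on held-out data
if probs[pred_idx] < THRESHOLD:
    print("I don't know")
else:
    print(pred_class)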

Hey Dimitris
This happens to me as well. After searching the net for answers, I found 4 main reasons for validation loss being lower than training loss, listed below. This usually happens when you have a small dataset or train for too few epochs, since random error is high in such cases. Reasons 1 and 2 alone can also account for a small gap. My understanding is that a validation loss equal to or slightly above the training loss is the best scenario.

The 4 reasons (credits):

1: Regularization is applied during training, but not during validation/testing.
2: Training loss is measured during each epoch while validation loss is measured after each epoch. On average, the training loss is measured 1/2 an epoch earlier.
3: Your validation set may be easier than your training set or there is a leak in your data/bug in your code. Make sure your validation set is reasonably large and is sampled from the same distribution (and difficulty) as your training set.
4: You may be over-regularizing your model. Try reducing your regularization constraints, including increasing your model capacity (i.e., making it deeper with more parameters), reducing dropout, reducing L2 weight decay strength, etc.
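Reason 1 is easy to see directly in PyTorch, by the way (a toy illustration):

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()      # training mode: dropout active
print(drop(x))    # roughly half the entries zeroed, survivors scaled by 2

drop.eval()       # eval mode: dropout disabled
print(drop(x))    # identity, so validation sees the full model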

2 Likes

Hello everybody,

I want to share some work from the previous months:

Thank you all for the great and encouraging community! Over the last few months I have really learned a lot from you! :smiley:

4 Likes

Hi @jeremy,
I want to deploy image classification on Android using the PyTorch Android API.
It requires a .pt file, but using learn.save() I am getting only a .pth file.
So how can I convert a .pth file to .pt?

This should be helpful: https://pytorch.org/tutorials/beginner/saving_loading_models.html