I am facing an issue when using lr_find() in Colab: it gets interrupted while running the third epoch, looking something like the image below. Because of this I am unable to plot the losses as well.
@Meghana_G After lr_find() you need to use learn.recorder.plot() instead of learn.recorder.plot_losses(). Your learn.lr_find() run can be interrupted if the loss is constantly increasing; there is no point in testing even higher learning rates if the loss is already getting worse. Once it is done, you can use learn.recorder.plot() to see how it performed.
Check https://docs.fast.ai/basic_train.html#Recorder.plot and https://docs.fast.ai/basic_train.html#Recorder.plot_losses for the difference between plot and plot_losses
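For intuition, that early interruption is just a divergence check: the LR finder sweeps increasing learning rates and aborts once the loss blows up past some multiple of the best loss seen so far. A minimal plain-Python sketch of that logic (not fastai's actual implementation; the factor of 4 here is an assumption mirroring its default stop-on-divergence behaviour):

```python
def run_lr_finder(losses, stop_factor=4.0):
    """Walk through losses recorded at increasing learning rates and
    stop early once the loss diverges past stop_factor * best_loss."""
    best = float("inf")
    kept = []
    for loss in losses:
        if loss > stop_factor * best:   # divergence: abort the sweep early
            break
        best = min(best, loss)
        kept.append(loss)
    return kept  # only these points would appear in learn.recorder.plot()

# A typical sweep: loss falls, flattens, then explodes at high LRs.
print(run_lr_finder([2.0, 1.5, 0.9, 0.8, 1.1, 3.9, 80.0]))  # → [2.0, 1.5, 0.9, 0.8, 1.1]
```

So an "interrupted" run is expected behaviour, and the plot still contains everything up to the divergence point.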
Hi, we have published all of the source, including the model training notebook, model weights, inference server, and JavaScript face tracking.
For more info, check it out:
@gokool describes how he trained the model on the large AffectNet dataset, @lauren describes how she built a high-performance fastai inference server, and I describe how we improved performance by cropping the faces client-side.
As I feed the app more images (and let my friends play with it), it seems like what I would really want to do with images from “the wild” is to first focus on the area of the picture that has a glass, then crop to that part of the image, and classify the cropped part. Do folks have any advice or examples for how to tackle a step-by-step process like that (or a similar one)?
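One common pattern for that kind of step-by-step process is a two-stage pipeline: a detector proposes a bounding box, you crop to it, and the classifier only ever sees the crop. A minimal sketch of the glue code, where `detect_glass` and `classify` are hypothetical placeholders (in practice a trained object detector and your existing classifier), not real APIs:

```python
def crop(image, box):
    """Crop a row-major 2D image (list of rows) to box = (top, left, bottom, right)."""
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]

def detect_then_classify(image, detect_glass, classify):
    """Two-stage pipeline: locate the region of interest, crop to it,
    then classify only the cropped region."""
    box = detect_glass(image)          # stage 1: find the glass
    if box is None:
        return "no glass found"
    return classify(crop(image, box))  # stage 2: classify the crop only

# Toy example with stub detector/classifier standing in for real models:
image = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
result = detect_then_classify(
    image,
    detect_glass=lambda img: (1, 1, 3, 3),
    classify=lambda c: "wine glass" if c[0][0] == 1 else "tumbler",
)
print(result)  # → wine glass
```

The benefit is that the classifier trains and predicts on tightly framed crops, so background clutter from "wild" photos matters much less.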
Which is quite amazing for 93 mushroom classes and a noisy dataset; looking at the Wikipedia pages for the 15, a lot are actually not distinguishable by picture alone.
Update: beyond that, I discovered that a lot of rare subtypes get confused because of dataset noise from googling the images automatically. E.g. when searching for ‘hairy X’ it also finds images of ‘shaggy X’ and ‘yellow X’.
It would be great to include this info in the app (‘it’s probably X, but it’s often confused with [the poisonous] Y’). Or to estimate the probability that someone using the app gets ill because of a wrong prediction …
I made a classifier that differentiates between three Indian dishes: Khakra, Papad and Parantha.
This will identify whether the food is papad, khakra or parantha. These are common Indian dishes that are round and similar in colour, but differ in thickness and taste.
The accuracy is 87.5%, which I think is alright given that it misclassified only 4 images and trained on only 91 images. I am having some issues with deploying the web app; I will add it here when it’s done. The notebook is here. PS: it’s written specifically for Colab.
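For what it’s worth, 87.5% accuracy with 4 misclassified images implies a validation set of 32 images (4/32 = 12.5% error). A quick sanity check, assuming those numbers:

```python
def accuracy(n_wrong, n_total):
    """Fraction of correctly classified validation images."""
    return 1 - n_wrong / n_total

# 4 mistakes on a 32-image validation set gives 87.5% accuracy.
print(accuracy(4, 32))  # → 0.875
```

With a validation set that small, each single image swings the accuracy by about 3 points, so the metric is quite noisy.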
Hi all, I made a classifier for 22 different Persian dishes, using around 80 images from Google for each class. I deployed it using this guide. Thanks to @jeremy, @simonw and @navjots, it was straightforward for someone like me without any background.
Here is the App: https://persiandish.now.sh/
Like @visingh, I attempted to classify architectural styles. In contrast to that effort, though, I used images retrieved from a web search using the Bing API. After obtaining the images and cleaning up the file names, the code pretty well mirrors the notebook template provided in the lesson. Accuracy ended up in the high 80%s, if I remember correctly, without much tuning. As an educational exercise, I’m pleased with it; as a real model, it would be wholly inadequate. Check out the gist here if you want.
This is great! It is not easy to teach a machine to tell whether a person is happy or not. I’ve been working on something similar: I want to classify images of people as sad, happy, or angry, and the best accuracy I have achieved so far is only about 68%.
Butterflies are very beautiful. I’ve often wanted to know the name of butterflies that I observe in the wild. In my home country of Malaysia there are over 1000 identified species of butterfly, each with its own distinct features and coloration. Making a classifier to help with conservation and casual identification seems like a worthy project to set myself to.
To start with, I made a classifier that identifies a butterfly by its family rank. This also seems like a good case for benchmarking the fine-grained classification capabilities of the resnet34 model, as the visual differences in features between butterfly families are extremely small.
These are the notable visual features that distinguish each of six families:
Swallowtails (Family Papilionidae): Notable for having tail-like appendages at the end of the wings
Brush-footed Butterflies (Family Nymphalidae): The largest family of butterflies, called brush-footed for having tiny forelegs that are used as tasting appendages.
Skippers (Family Hesperiidae): These should be the easiest to differentiate; skippers have a robust thorax similar to a moth’s, and antennae that end with a hook.
I acquired ~200 example images of each family from Google Images, using the wonderful little JavaScript tool written by @melonkernal to exclude irrelevant images and collect the image URLs for my dataset.
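If you later want to automate dataset building at scale, the core of a downloader is just mapping each collected URL to a class folder and a stable filename; a minimal sketch under that assumption (the actual fetching, e.g. with `urllib.request.urlretrieve`, is omitted, and `data/butterflies` is a made-up root directory):

```python
import hashlib
from pathlib import Path

def dest_path(url, cls, root="data/butterflies"):
    """Derive a stable local path for a collected image URL.

    Hashing the URL gives a deterministic, collision-resistant filename,
    so re-running the download never duplicates images on disk."""
    ext = Path(url.split("?")[0]).suffix or ".jpg"   # fall back to .jpg if no extension
    name = hashlib.md5(url.encode()).hexdigest()[:12] + ext
    return str(Path(root) / cls / name)

print(dest_path("http://example.com/img/swallowtail.png?w=640", "Papilionidae"))
```

Because the filename is a pure function of the URL, the same scrape run twice produces the same files, which makes incremental re-scraping safe.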
After initial training this is the plot for the learning rate:
Here is training after unfreezing with the confusion matrix:
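The confusion matrix itself is just a tally of (actual, predicted) pairs over the validation set; fastai’s interpretation tools draw it for you, but the underlying computation is roughly this (a plain-Python sketch, not fastai’s code; the example labels are illustrative):

```python
def confusion_matrix(actuals, preds, classes):
    """Rows = actual class, columns = predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for a, p in zip(actuals, preds):
        m[idx[a]][idx[p]] += 1      # one tally per validation image
    return m

families = ["Papilionidae", "Nymphalidae", "Hesperiidae"]
actual = ["Papilionidae", "Nymphalidae", "Nymphalidae", "Hesperiidae"]
pred   = ["Papilionidae", "Nymphalidae", "Papilionidae", "Hesperiidae"]
for row in confusion_matrix(actual, pred, families):
    print(row)
```

Off-diagonal cells are exactly the pairs of families the model mixes up, which is what makes the matrix so useful for a fine-grained problem like this.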
The accuracy I got for the final model was 83%, which seems pretty good for a fine-grained problem, but I think I overdid it on the number of epochs. The error rate was going up and down, which I think is a sign that it was overfitting.
Improve the data quality. Many of the most-confused images were dirty; I should look into more ways of cleaning the data to improve accuracy.
Automate the building of my dataset. I’d like to be able to scale up to 1000 classes if I take it further and attempt species-rank classification.
Fewer epochs. With transfer learning we already have pre-trained weights that work pretty well; running too many epochs didn’t really improve the accuracy much.
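One way to take the "fewer epochs" point further is early stopping: keep the checkpoint with the best validation error and stop once it hasn't improved for a few epochs (fastai has callbacks for this; below is just the bare selection logic in plain Python, with made-up error rates):

```python
def best_stopping_epoch(error_rates, patience=2):
    """Return (best_epoch, best_error), stopping the scan once the error
    rate has failed to improve for `patience` consecutive epochs."""
    best_epoch, best_err, since_best = 0, error_rates[0], 0
    for epoch, err in enumerate(error_rates[1:], start=1):
        if err < best_err:
            best_epoch, best_err, since_best = epoch, err, 0
        else:
            since_best += 1
            if since_best >= patience:
                break  # stop training; keep the best checkpoint
    return best_epoch, best_err

# Error rate bounces up and down after epoch 3 — a sign to stop there.
print(best_stopping_epoch([0.40, 0.28, 0.21, 0.17, 0.19, 0.18, 0.20]))  # → (3, 0.17)
```

This way extra epochs cost you nothing but time: the model you keep is always the best one seen, not the last one.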
I’d love to hear feedback!
It’s my first time getting into fast.ai, but I feel like I’ve learnt a huge amount from just working on a single problem. Thanks @jeremy @rachel for making this course available for us!
I got most of the data using search engines and listing sites. I also used VMMRdb (which I believe sourced its data from Craigslist), but since it’s US data I only extracted the parts relevant to Australia.
I used resnet50 and created an image classifier that tells you what type of martial arts someone is potentially practicing. I used the following types:
I agree; trying it in practice, when it doesn’t get it right it often gets very close in a subjective sense.
I’ll look at rolling up models into series for e.g. Mercedes and see how the accuracy works. I didn’t know about the Image Relabeller, that’s fantastic!
Hello everyone!
I previously worked on the alligator vs crocodile app here, on top of which @Lankinen wrote an amazing post and article. Even though it was working fine, I was not satisfied with the testing results, so I decided to make version 2 of the app (again focusing on the deep learning part; I am not a UI person). I collected around 4k images (2k crocodile and 2k alligator) and trained my model to close to 97% accuracy. After that I deployed it on Heroku. You can find the app in action here. Interestingly, this classification problem is harder than I expected.
What is interesting is that I kept both models’ weights: V1 for the previous model, and the epoch-36 and epoch-72 checkpoints for the V2 model. I ran my model on all the test images and saved the results in the database provided by Heroku itself. I save the images, and at the end of each day I reset the counts and fine-tune the model. (Someone asked exactly this question in today’s class.) I am planning to do that weekly instead of daily. I have a Medium post write-up, still a draft.
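The daily reset-and-fine-tune loop can be sketched as a simple feedback buffer: accumulate user-submitted images with their labels, and when the day (or week) rolls over, hand the batch to a fine-tuning step and clear the counts. A plain-Python sketch of just the bookkeeping, where the `fine_tune` callable and the `threshold` parameter are assumptions standing in for the real training step:

```python
class FeedbackBuffer:
    """Accumulate user-submitted images, then periodically fine-tune and reset."""
    def __init__(self, fine_tune, threshold=100):
        self.fine_tune = fine_tune    # callable taking a list of (image, label) pairs
        self.threshold = threshold    # only fine-tune once enough samples arrived
        self.samples = []
        self.counts = {"crocodile": 0, "alligator": 0}

    def add(self, image, label):
        self.samples.append((image, label))
        self.counts[label] += 1

    def flush(self):
        """Run at the end of each day/week: fine-tune on the batch, then reset."""
        if len(self.samples) >= self.threshold:
            self.fine_tune(self.samples)
        self.samples = []
        self.counts = {k: 0 for k in self.counts}

trained_on = []
buf = FeedbackBuffer(fine_tune=trained_on.extend, threshold=2)
buf.add("img1.jpg", "crocodile")
buf.add("img2.jpg", "alligator")
buf.flush()
print(len(trained_on), buf.counts)  # → 2 {'crocodile': 0, 'alligator': 0}
```

Switching from daily to weekly retraining then just means calling `flush()` less often (and raising the threshold so tiny batches don’t trigger a fine-tune).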
This was a real learning experience for me. I deployed a Heroku app with a Heroku database in a Docker container and trained 2 different models. Learned a lot.