Share your work here ✅

The page was private and I didn’t notice; I’ve made it public, so it should work now. Please let me know if it doesn’t!

Btw thanks :)!

I ended up with a human, gorilla, and monkey detector. It’s a little bit fun because they are similar yet different. The optimisation can’t reach 100% accuracy, but I found it pretty accurate at detecting gorillas lol

Hi Everyone!

I am sharing my initial work here with the fast.ai community. I want to thank @jeremy, Rachel Thomas, and the community for this great course, plus all the material and advice in the various forums and links. I got so excited writing about AI, potato chips, acrylamide, and how to prevent health issues like cancer with this technology that I ended up writing an entire blog post, which you can see at https://www.clearedselect.com. Below is an easy-to-consume extract!

Summary of what I did: After running some of the example image classification models from the course, I decided I wanted to create my own classifier. I chose to create a simple model that classifies Canadian potato chips, based on 300 pictures that I took myself of 3 different leading brands. Below is a sample of the code and images, and you can see the entire notebook at Canadian Potato chip classification | Kaggle
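For anyone who wants to try the same thing, a training flow like this can be sketched with standard fastai calls. This is my own minimal version, not the notebook's exact code, and it assumes the 300 photos are organised into one folder per brand (the function name, folder layout, and hyperparameters here are all hypothetical):

```python
from pathlib import Path

def train_chip_classifier(data_dir, epochs=3):
    """Hypothetical sketch: train a brand classifier from a folder of
    images laid out as data_dir/<brand_name>/*.jpg (folder = label)."""
    # fastai imported lazily so this sketch can be read/loaded without it
    from fastai.vision.all import (
        ImageDataLoaders, vision_learner, resnet18, error_rate, Resize)

    dls = ImageDataLoaders.from_folder(
        Path(data_dir),
        valid_pct=0.2,          # hold out 20% of the 300 images for validation
        seed=42,                # reproducible train/valid split
        item_tfms=Resize(224))  # resize every image to 224x224
    learn = vision_learner(dls, resnet18, metrics=error_rate)
    learn.fine_tune(epochs)     # fine-tune the pretrained model
    return learn
```

With a dataset this small, `valid_pct=0.2` leaves about 60 images for validation, which is why the reported accuracy can swing a bit between runs.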

The results: The model did amazingly well (amazing to me, anyway) with a training set of only 300 images! I did not expect it to do that well! The model was able to successfully identify the individual brands and types, even from a distance! Full disclosure: I personally could not tell apart the plain potato chips of two different brands without the labels, but the model proved it could! And no, this was not an accident; I was trying my best to trick the model with similar images (sneaky, I know).


Next steps for me: I am so excited with the results so far! For my capstone project, I am going to build a model that will predict both the potato chip type and, for a chosen vendor and type of potato chip, the level of acrylamide in parts per billion, along with some easy-to-understand interpretation of the results. Why focus on acrylamide detection? Acrylamide is a suspected human carcinogen that is coming under increasing regulation by governments, so any advancement has the potential to positively impact others.

In the meantime, I was also thinking of posting a larger library of potato chip images in a competition to give back to the community, so others could have something fun to try out as well. If you think that is a good idea let me know directly or via the poll below! Or let me know if you think that is a dumb idea that I should not do! Just be nice please.

Feel free to reach out!

Poll about building and sharing a kaggle library of potato chips images for fun learning:

  • Yes - please build a library of potato chip images others can learn from! Sounds cool!
  • No - we have enough examples with birds, trees, bears, etc.

0 voters

5 Likes

Hi everyone!

After completing Lesson 1, I’ve made a more generic version of the Is it a bird? notebook. The notebook prompts the user for two categories (e.g., ‘dog’ and ‘cat’), downloads images of those categories, and trains a model on them. Finally, the notebook asks for the URL of a JPEG image and predicts which category the image belongs to.
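That flow can be sketched roughly like this with standard fastai calls. To be clear, this is my own sketch, not the notebook's code: `search_fn` is a stand-in for whatever image-search helper you use (e.g. a DuckDuckGo wrapper), and all the other names are made up for illustration:

```python
from pathlib import Path

def build_two_category_classifier(cat_a, cat_b, search_fn, dest='data'):
    """Hypothetical sketch: download images for two categories, train a
    classifier, and return a function that predicts from an image URL.
    `search_fn(query)` is assumed to return a list of image URLs."""
    from fastai.vision.all import (
        ImageDataLoaders, vision_learner, resnet18, error_rate,
        Resize, download_images, PILImage)

    # one folder per category, so folder names become the labels
    for cat in (cat_a, cat_b):
        folder = Path(dest) / cat
        folder.mkdir(parents=True, exist_ok=True)
        download_images(folder, urls=search_fn(f'{cat} photo'))

    dls = ImageDataLoaders.from_folder(
        Path(dest), valid_pct=0.2, seed=42, item_tfms=Resize(192))
    learn = vision_learner(dls, resnet18, metrics=error_rate)
    learn.fine_tune(3)

    def predict_from_url(url):
        # fetch the image bytes and return the predicted category
        import urllib.request
        data = urllib.request.urlopen(url).read()
        pred, _, _ = learn.predict(PILImage.create(data))
        return pred

    return predict_from_url
```

In a notebook you would replace the function arguments with `input()` prompts, as the post describes.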

3 Likes

Big Cats classifier

Dear FastAI friends:

After listening to the first lesson of the Deep Learning for Coders 2022 course, I took Jeremy’s suggestion and used the vision models available in FastAI to build a fun classifier that can differentiate between different big cats.

Problem

Using standard animal images retrieved from DuckDuckGo, we want to identify different big cats using the variety of Resnet models at our disposal. The goal is to identify the following cat species with reasonable accuracy:

  • Cheetah
  • Jaguar (the animal, not the car in the training set)
  • Tiger
  • Cougar
  • Lion
  • African Leopard
  • Clouded Leopard
  • Snow Leopard

Solution

We will train all the resnet models available in the fastai vision library, then compare the accuracy of predictions for each label type. We will use the pre-trained models and fine-tune them, similar to the is-it-a-bird-or-plane model.
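The comparison loop can be sketched like this. This is a rough guess at the approach rather than the author's exact code; `dls` is assumed to be the `DataLoaders` already built from the DuckDuckGo images:

```python
def compare_resnets(dls, epochs=3):
    """Hypothetical sketch: fine-tune each torchvision resnet on the same
    DataLoaders and collect the final validation error rate of each."""
    from fastai.vision.all import (
        vision_learner, error_rate, resnet18, resnet34, resnet50,
        resnet101, resnet152)

    results = {}
    for arch in (resnet18, resnet34, resnet50, resnet101, resnet152):
        learn = vision_learner(dls, arch, metrics=error_rate)
        learn.fine_tune(epochs)
        # validate() returns [valid_loss, metric1, ...]; index 1 is error_rate
        results[arch.__name__] = learn.validate()[1]
    return results
```

Keeping the same `dls` (and therefore the same train/valid split) across architectures is what makes the error rates directly comparable.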

Results

After fine-tuning is completed with three epochs, the resnet models are ready for inference. Here is the final error rate achieved with each of the models:

image

Note: Using a GPU reduced the training time by roughly 10x; on a CPU, the Resnet18 model took about 6-7 minutes to train.

Here is the Colab notebook for the big cats classifier

In terms of accuracy, the results were quite comparable across the various resnet models, with a steady increase in accuracy. Here are the details from inference done on 154 test images.

image

What’s next?

I’m working on a galaxy classifier: a galaxy morphology classifier that uses Galaxy Zoo human-encoded labels as input and, using the SDSS dataset, attempts to classify a few types of galaxies. In my experiments so far, I don’t see the model converging: the training loss decreases, but the validation loss flatlines or sometimes increases. So I’m still working on that and learning the system. I’d love to discuss this project with anyone interested, share ideas, and get feedback and suggestions from this community.

AJ

(Aspiring Astronomer)

1 Like

Hello everyone!
Lemons and pineapples have never been so high-tech! I just completed a machine learning project on classifying citrus (lemon) and non-citrus (pineapple) fruits using a convolutional neural network. With a diverse dataset of fruit images and labels, I was able to train the model to accurately distinguish between these two types of fruit. Check out the project on Kaggle at https://www.kaggle.com/glunkad/cirtus-and-non-cirtus-fruits and see how it performs on new, unseen fruit images. The possibilities are endless with this kind of technology – imagine a fruit sorting robot or a fruit identification app that helps you choose the ripest produce.

Hi all,
Thanks a lot, Jeremy, for the course.
I just finished Lesson 1, and here’s a Simplified Waste Classifier based on ‘Is it a bird’ code.
It has 3 classes, based on the rubbish bins used in Randwick, NSW, Australia.

1 Like

Update:

I’ve gone through Lesson 2 and created an HF Space to play with various models.
You can find it here: HF Big Cats Classifier

I showed this app to my kid and, much like Jeremy’s story, I promptly got the request to search for a lioness to see if gender dimorphism throws off the model, and it did! Second, we searched for a ‘liger’, and the model didn’t do well on that either; I expected to see 50% tiger and 50% lion, but it predicted a leopard :slight_smile:

So, I’m retraining the model to take cubs and gender dimorphism into account.

One thing that did work well, though, was a black panther image: it is a melanistic colour variant of the leopard, and the model managed to predict that correctly. But this was temperamental; it started incorrectly predicting the panther after more iterations of fine_tune().

For homework 1, I tried to compare portraits by two artists: one is Karl Bryullov and the other is Alexander Shilov. The reason is that I once read a post claiming it’s nearly impossible to confuse their works if one pays careful attention to the details (the shading of cheeks, costumes, and backgrounds), although to an untrained eye they might seem similar. Well, it turns out a deep learning model from fast.ai has a trained eye :slight_smile: (tr = true, pr = predicted):

image

You can check the model out here

One problem I noticed, though, is that the set I downloaded from the Internet contains differently shaded versions of the same picture, so some pictures might get a good score due to leakage between the training and validation sets. One solution is to manually remove duplicates, but I’ll see whether I think of or learn something better during the course :slight_smile:

Hello everyone! First off thank you to @jeremy and everyone else for this course! It seems incredible and I’m excited to go down this journey. I watched Lecture 1, and have been playing around with image classifiers.

I created a model for diagnosing and classifying Diabetic Retinopathy based on retinal fundus images. The images look like this:

I am using the Messidor dataset, which contains 1,200 images and labels 0-5, indicating severity.

Here is the notebook! I didn’t implement anything too fancy, but I would love any feedback or advice.

13 Likes

Very interesting @cdeangelis !

1 Like

Hey all! I just finished Lesson 3. I spent a lot of time really trying to understand and re-implement everything from the book myself. Along the way I honed my practice methods a bit, and I wrote up a short blog post about the process: https://carvermichael.github.io/2022/12/30/my-practice-method-for-fastai.html.

Very excited to continue on!

2 Likes

Hello everyone!

I deployed a simple model for the first time and wanted to share it!

It is a binary classifier that tries to predict whether a mushroom is deadly or not. Due to the risky nature of this, and because it hasn’t been trained on much data yet, the two categories it predicts are 1. deadly and 2. maybe.

Don’t go eating random mushrooms just because a poorly trained model says it might be alright!

MaybeDeadly

The maybe category includes poisonous mushrooms as well; this is on purpose, as it should predict deadly only for mushrooms that are likely to kill you if eaten.

I hope to improve on this early prototype quite a bit, so if any of you play with it and notice any really bad outputs, especially false negatives, please let me know.

4 Likes

Hi All,

I just completed Lesson 1 of the course and created an ‘Asian Jim’ detector, referring to this scene from the US version of The Office, using Jeremy’s bird-or-not notebook as a base. Let me know what you think!

1 Like

Hi All

I have built a currency classifier which classifies the currencies of six major central banks. You can view it here: Tell me which currency it is.

I have two questions, please help in answering them:

  1. My error rate is around 35%, which is quite high, so I am not sure if I should really have used the resnet18 model. Can someone help me decide which model to use in which case?
  2. Can it still be called fine-tuning when I have trained the model 20 or 50 times?

Please vote on the notebook too :grinning:
Thanks.

2 Likes

Hi everyone,

Sticking to Jeremy’s advice and diving right into building models.

In this Kaggle notebook, I built a classifier to identify curling houses (that is, curling the sport, popular in Canada) and regular houses.

Really enjoying the course so far and looking forward to continuing!

1 Like

The idea is good, but I am not able to see your NB.

Lovely idea. I am surprised by two things in your model:

  1. How your model got better after running for more epochs. In my experience, resnet18 either does not improve or even gets worse when I run more epochs. So that’s pretty cool to see.

  2. You were able to get the error rate down to 20%, which is not bad in your case. I have tried running my models in similar cases, where the images or categories are not that neatly polished in terms of their differences.

Would love to hear what others say!

I deployed a classifier on a Hugging Face Space (Lizard or Rocks) following Tanishq’s excellent tutorial.

I learned that if you’re searching for “rock photos”, you’re likely to surface photos of Dwayne “The Rock” Johnson that will gum up your data. :smiley:

I also saw that this model can be fooled by lizards that are hard to see.

It might be interesting to train another one using some images of the type that you find with a search like “lizard camouflage”. I think a search like “lizard photos” (which was used to train this model) mostly finds images where the photographer wants to make sure you can see the lizard.

3 Likes

I also started this Quarto blog to track my progress :slight_smile:

2 Likes