Share your work here ✅

Wow, that’s super impressive! Congrats on your win!

2 Likes

Great job! :smiley:

2 Likes

I created a fruit multi-classifier after reading chapter 6 of the fastbook.

I have also written blogs on how I did it.
Part 1 explains how I trained it, and this Part 2 shows how I deployed it on Hugging Face.

3 Likes

I’m on chapter 4 of the book and enjoying it so far. Thanks @jeremy for providing such great materials and platforms for people who want to enter the world of Deep Learning!

I have created an Indoor Plant Classifier that can identify 250 different indoor plants. I trained it with 25,000 images using the methods described in the first two chapters of the book.

Give it a try with plants around you and let me know if it performs well enough!

4 Likes

Hi Jeremy, I’ve created a Car or Bike classifier; here is the link - https://www.kaggle.com/gbiamgaurav/is-it-a-car-or-bike

But I’m getting an error. Can you please help?

2 Likes

It’s an issue with the latest version; a simple workaround is here

1 Like

Hello there!
This is my work for Lesson 2. Of course, multiple parts of the code used to build the model, for Gradio and HF, are copy/pasted from the course notebooks. But as it’s said in the video, for now there’s no need to understand everything; let’s build something and enjoy. (Maybe I should try from scratch?)

I spent the whole day on it, and after multiple issues (mamba/Jupyter installation, HF Spaces, etc., haha), I created an architecture classifier.
For now it can recognize 8 building architecture styles from a photo (Greek, modern, Gothic, Byzantine…).
What is strange is that after 3 epochs the accuracy is around 70%, but most of the time it makes a prediction at 100% confidence on HF. So maybe there were many difficult images in the training and validation sets :face_with_raised_eyebrow:

Anyway here is the link Architecture Classifier - a Hugging Face Space by nicolasca :slightly_smiling_face:

3 Likes

As is encouraged here, I created a blog and wrote a blog post (even though I’m a bit shy about it, I tried to do the exercise).
In fact, I didn’t write much about Lesson 2, but more about why it took me 5 years since I took the Andrew Ng course to train and deploy my first model.

Deep Learning Journey : it took 5 years to finally train and deploy a model | nicolasca (created it with Hugo and Github Pages)

2 Likes

When you say it’s a 100% confident prediction, do you mean it’s a correct prediction? Or is it maybe 100% confidently wrong 30% of the time?

Cool to see architecture; I think it would be a challenge to get it to a better accuracy, since some of the architecture types you selected share characteristics, and there are quite a few of them. You gave yourself a challenge!

3 Likes

@gmjn @nicolasca indeed, the 70% accuracy and the 100% confidence are two very different things.

  • accuracy is measured on the validation set during training; it shows the percentage of validation samples that the model classified correctly.
  • confidence (as reported on HF during production / testing) comes from the softmax outputs of the model on that one specific sample.

As @gmjn also mentioned: a model can be 100% confident in its predictions, but it can still be wrong.
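To make the distinction concrete, here is a minimal, stdlib-only Python sketch (not fastai code) of how softmax turns a model's raw outputs into the per-sample "confidence" shown on HF, and why it can saturate near 100% regardless of correctness:

```python
import math

def softmax(logits):
    """Convert raw model outputs (logits) into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# A large gap between logits saturates softmax toward 1.0,
# whether or not the top class is actually correct.
logits = [9.0, 1.0, 0.5]       # model strongly favours class 0
probs = softmax(logits)
print(max(probs))              # ~0.999: reported as "100%" confidence
# Accuracy, by contrast, is an average over the whole validation set,
# so 70% accuracy and per-sample ~100% confidence can easily coexist.
```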

2 Likes

@gmjn @lucasvw Yes, you are both right; I was not precise with my words.
The model is 100% confident (on all the images I have tested) with 70% accuracy. So when the model is wrong, it is sure about it ^^, which is not good.

I don’t think I will continue to work on it, but if I did, I’m not sure what I would do. Probably dig deeper into the data and check the images with a wrong prediction and high confidence. Maybe I could find a pattern, clean some images, and improve the model.

1 Like

My first-ever ML project is a simple adaptation of Is It a Bird? called the Marine Mammal Classifier AKA Is It an Otter?, seen here correctly identifying a California sea lion at the Santa Cruz Wharf near my home:

During training, I was only able to achieve 97% accuracy, whereas in a previous run I was getting 100%. Not sure if this is due to different training images returned by DDG, the stochastic part of SGD, or both.
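Run-to-run differences like this usually come from some unseeded source of randomness (download order, shuffling, weight init). A tiny stdlib sketch of the idea, with a hypothetical `shuffled_batch` helper standing in for any random step in the pipeline:

```python
import random

def shuffled_batch(items, seed=None):
    """Shuffle a copy of `items`; with a fixed seed the order is reproducible."""
    rng = random.Random(seed)      # each call gets its own generator
    out = list(items)
    rng.shuffle(out)
    return out

run_a = shuffled_batch(range(10), seed=42)
run_b = shuffled_batch(range(10), seed=42)
run_c = shuffled_batch(range(10))  # unseeded: may differ from run to run
assert run_a == run_b              # seeded runs are identical
```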

3 Likes

Heyo! I made an Anakin Skywalker/Darth Vader image classifier by following the course on “Computer vision intro”.

Here’s my notebook:

2 Likes

I am genuinely happy to complete lesson 1 on day 2 of being here. Thanks to Jeremy and the fast.ai people. Here is my very first thing in deep learning; I just tweaked the “is it a bird?” notebook a little bit.
https://www.kaggle.com/code/hadisajjadi/military-vs-commercial-plane

3 Likes

Hey everyone, for lesson 1 I made an image recognizer that can tell whether an image is of a creature from the game Elden Ring or an enemy from the game Ratchet and Clank. Hope you like it :slight_smile:

4 Likes

Hey I made a simple “Toad or Frog” classifier based on the Lesson 1 notebook on bird classification. Since frogs and toads are quite similar and DuckDuckGo doesn’t provide a lot of results for the search queries, I applied flip and rotation transformations to train the model on different images for each epoch. Here’s my notebook: Toad or Frog? | Kaggle
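For anyone curious what flip and rotation augmentations actually do, here is a rough stdlib illustration on a 2D image represented as a list of lists; in a real fastai pipeline you would pass `aug_transforms()` as `batch_tfms` rather than hand-roll functions like these:

```python
def hflip(img):
    """Horizontal flip: mirror each row of a 2D image (list of rows)."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
print(hflip(img))   # [[2, 1], [4, 3]]
print(rot90(img))   # [[3, 1], [4, 2]]
# Applying such transforms randomly each epoch means the model
# effectively sees different variants of the same small dataset.
```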

4 Likes

German Shepherd and Wolf Classifier

This is my first fastai / PyTorch notebook.
I changed only a few lines of code, adding another example for prediction.
It actually works quite well!

Here’s the link for code: German shephard vs Wolf classifier | Kaggle

1 Like

Renaissance or Baroque Paintings Classifier

Just finished the first tutorial and made this classifier since it looked fun.
Input Renaissance image: https://www.history.com/.image/ar_16:9%2Cc_fill%2Ccs_srgb%2Cfl_progressive%2Cg_faces:center%2Cq_auto:good%2Cw_768/MTk1MTQzNjQ0MDU0MjM0ODUw/renaissance-gettyimages-1309914466.jpg
It uses resnet18.


It works :smile:
I had some trouble running it locally, so I did it on Kaggle.

3 Likes

Hey there,
I’ve created an AI classifier that can finally tell the difference between coriander and parsley! As someone who loves to cook, this is helpful for me since I always used to mix these two up.
Here it is if you want to try it out:

https://hosam123-coriandervsparsley.hf.space

The fast.ai team’s contributions to the field of AI education have been invaluable, and I’m grateful for their work. Thanks guys :smiley:

2 Likes

Hello :wave: . I wanted to update my progress on a project I have been working on for some time.

The project attempts to identify snakes up to the species level. You can play around with the demo here.

The project is challenging (in a good way) for me for two reasons:

  • Snake identification itself is a challenging task - learn more
  • It is my first big project that involves a lot of data and categories (386,006 photographs of 772 snake species)

I took the advice of prototyping on smaller datasets and incrementally increasing them. Currently I am at 50 categories, and I am planning to scale up to the full dataset with all the techniques that are working on the smaller samples. So far I have a top-3 accuracy of 92.9%, an accuracy of 80.2%, and an F1 score (macro) of 80.5% on the 50 categories. The runs and reports can be found on wandb

What is giving the best results so far is progressive resizing, combined with MixUp for the initial progressive-resizing iterations, where I can afford to train for more epochs (because each epoch takes less time). I still have some things I want to experiment with, as indicated by the TODO.md file in the repo, like self-supervised learning and maybe NLP at some point. I will iterate on the project as I continue the course, including part 2, and as I finish up fastbook.
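For readers unfamiliar with MixUp: it blends pairs of training samples and their labels, which acts as a strong regularizer over long training runs. fastai provides this as the `MixUp` callback; the `mixup` helper below is just an illustrative stdlib sketch on flattened toy "images" with one-hot labels:

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.4, rng=random):
    """Blend two samples: x = lam*x1 + (1-lam)*x2, and the same for the labels."""
    lam = rng.betavariate(alpha, alpha)  # mixing coefficient drawn from Beta(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Two toy samples with one-hot labels over 3 classes
xa, ya = [0.0, 1.0, 0.5], [1, 0, 0]
xb, yb = [1.0, 0.0, 0.5], [0, 1, 0]
x, y, lam = mixup(xa, ya, xb, yb)
# The mixed label is a soft distribution over both classes, still summing to 1
assert all(0.0 <= v <= 1.0 for v in y) and abs(sum(y) - 1.0) < 1e-9
```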

I also decided to keep a LOGBOOK.md for this project that documents my thoughts and progress whenever I work on it.

In the demo, I also include a Wikipedia summary that might reveal some useful info when identifying the snake, e.g. whether it is venomous and the countries where the predicted species is most likely to be found.

My reasoning is as follows: since I have a top-3 accuracy of around 93%, when a user predicts an image of a snake, we can currently be about 93% sure that the snake is among the first three predictions (if its species was in the training set). From there, the user can use more info like countries and description to narrow down the species, e.g. if the first prediction is a species mostly found in Australia and you were bitten in Canada, you might want to look at the second predicted species.
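That narrowing-down workflow can be sketched in a few lines of plain Python. The species names, probabilities, and range data below are entirely made up for illustration; the real app gets them from the model's softmax outputs and the Wikipedia summaries:

```python
def top_k(probs_by_species, k=3):
    """Return the k (species, probability) pairs with the highest probability."""
    return sorted(probs_by_species.items(), key=lambda kv: kv[1], reverse=True)[:k]

def narrow_by_country(candidates, ranges, country):
    """Keep only candidates whose known range includes the given country."""
    return [(s, p) for s, p in candidates if country in ranges.get(s, set())]

# Hypothetical model output and range data (illustration only)
probs = {"inland taipan": 0.55, "eastern brown": 0.30,
         "garter snake": 0.10, "adder": 0.05}
ranges = {"inland taipan": {"Australia"}, "eastern brown": {"Australia"},
          "garter snake": {"Canada", "USA"}, "adder": {"UK"}}

top3 = top_k(probs)                               # model's three best guesses
plausible = narrow_by_country(top3, ranges, "Canada")
print(plausible)                                  # [('garter snake', 0.1)]
```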

I will update my progress in this thread on the next iteration of the project :smiley:. Feel free to share any comments or feedback.

Project Repo - GitHub - jimmiemunyi/snake-species-identification: Identification of Snakes upto the species level

11 Likes