Trained an image classifier to distinguish 2000s-era “brick” cellphones from smartphones.
Works fairly well.
Notably, it’s unsure about BlackBerrys (it learns the keyboards, I guess), but also about early iPhone iterations. So the model is not only learning keyboard vs. touchscreen as features, but maybe also other variables depending on the era the photo was taken in.
I’ve also built a second classifier to distinguish Midjourney and Stable Diffusion images. Turns out you can do this fairly well, at a 15% error rate.
Following Lesson 5, I re-created Jeremy’s “Linear model and neural net from scratch” notebook with my own narration.
It follows the same pattern as Jeremy’s, but I wrote it to improve my own understanding of building a linear model and a neural net, and it also includes some tips that absolute beginners should not miss.
Plus, it even contains a section that shows, in code, why accuracy cannot be used as the loss function for a neural net.
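The core idea of that section can be sketched like this (a minimal toy example of my own, not the notebook’s actual code): accuracy is a step function of the weights, so its gradient is zero almost everywhere, while a smooth loss gives SGD a slope to follow.

```python
import math

# Toy data: four inputs with 0/1 labels (illustrative, my own choice).
xs = [2.0, -1.0, 0.5, -3.0]
ys = [1, 0, 1, 0]

def accuracy(w):
    """Fraction of correct predictions for a simple threshold model."""
    preds = [1 if w * x > 0 else 0 for x in xs]
    return sum(int(p == y) for p, y in zip(preds, ys)) / len(ys)

def mse(w):
    """A smooth loss: squared error of sigmoid(w*x) against the label."""
    preds = [1 / (1 + math.exp(-w * x)) for x in xs]
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

# Finite-difference "gradient" of each function at some weight w:
eps = 1e-4
w = 0.7
grad_acc = (accuracy(w + eps) - accuracy(w)) / eps
grad_mse = (mse(w + eps) - mse(w)) / eps

print(grad_acc)  # 0.0: a tiny nudge to w flips no predictions
print(grad_mse)  # non-zero: the optimizer has a direction to move in
```

Because a tiny change in `w` almost never flips a thresholded prediction, accuracy is flat almost everywhere, which is exactly why training uses a smooth loss and reports accuracy only as a metric.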
For my first project, I wanted to do something practical for all the bird watchers in my region (I’m from Antioquia, Colombia). So I made an image recognizer for hummingbird species, an astonishing type of bird and the subject of admiration by many biologists and nature lovers.
For the model, I built a dataset of 2800 pictures: 50 images of each of the 56 species present in the region. The pictures were downloaded from the internet using the DDG API, as shown in the course.
I plan to add more species in the future, as well as include other forms of identification such as using the sound of the bird to identify it.
This is my first project: a simple apple detector. I used 30 images of apples in different settings (orchards, apple trees, and individual apples), and the other class was oranges, which were also in different settings.
I used the course code to make a classifier that tries to decide whether a given image is a photo or a painting. With very little tweaking, I got some decent results:
Hello everyone, I’ve just finished the second lesson.
I can’t thank you enough for this wonderful book.
I created my first classifier and I feel more encouraged to go deeper: Matt Damon Classifier - a Hugging Face Space by ifarg
Hi all,
Thank you so much for such an exciting learning endeavour! I just built my first ever fastai application as a solution to a uni assignment. It was a lot of fun and I can’t wait for the next one.
After chapter 1, I tried the Kaggle competition “Natural Language Processing with Disaster Tweets” and got decent results. The code is not the best; it’s just a modified version of the “Is it a bird?” code.
Check out my new notebook that helps you go from a deep learning model to deployment.
Notebook: Fastai to FastAPI+Railway | Kaggle
Deployment: FastAPI + Railway
Training: Fastai
(P.S. If you’re a fan of The Office, there are some easter eggs 🐣)
If you have ever wondered “What on Earth is that cheese?”, I present The Grand Cheese Oracle. For my first project, I built a basic cheese classifier, trained on 1200 pictures of 6 cheeses. I had to clean up images to remove brand names and figure out how to sync git with Hugging Face Spaces (I ended up using SSH keys). Overall a challenging and rewarding project from a beginner’s standpoint. Looking forward to the rest of the course. Cheesy - a Hugging Face Space by kakis2
Here is a cool bike vs. TV/desktop-monitor classifier (in case you ever forget which is which), made using Hugging Face embeds. The website has a bit of a hacker theme (and is fairly responsive on mobile devices too!), inspired by Jeremy’s website. I learnt a good amount of CSS while doing this: https://suchitg04.github.io/image_classifier_web/
Hello Everybody, Rubanza Silver here.
So after lessons 1 and 2, I built an image classifier to detect the type of antelope in an image. I was inspired to build this after taking a trip through the savannah (Katungulu road, Uganda); I found the wild fascinating, so I thought an antelope classifier would be perfect.
Data cleaning and augmentation were fascinating things to learn in lesson 2. The fact that we train the model first and clean the data afterwards was quite counter-intuitive, but it also made sense. I’ve put my understanding of data cleaning and augmentation in a blog post, in a really funny way (wow, look at me laughing at my own jokes). Check this out: AsquirousSpeaks - Data Cleaning and Augmentation
I successfully built a robust image classification model using fastai that accurately identifies and classifies cars, bikes, airplanes, and ships. You can check out my notebook and add more categories, like trains.
Hi, just started redoing the course. Love how much the fastai library has progressed; it’s even easier to use now, and the source code is very readable. See my first exercise here:
I would need to clean up the data to make it better: there are a lot of video game images, since I chose to append the word “game” to each label.
Following Lesson 6’s “How random forests really work” notebook, I created a notebook of my own that closely mimics it, apart from the prose and some extra experimentation, and added an extra section that builds decision trees from scratch, storing the info for each node in a dictionary.