Created a YouTube thumbnail (video cover photo) click-through rate predictor!
Try this model on Hugging Face Space
This is a ResNet model that was fine-tuned on a dataset from my wife’s YouTube channel.
I have deployed the model to a custom website, which is more accessible for my wife. I used GitHub Pages to host it and the Hugging Face embedding method to add the model, because the API method mentioned in the course no longer works.
I added two identical modules to the website so my wife can upload and compare two candidate cover photos for her next YouTube video. The model works well on the data from her recent videos!
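For anyone curious what a thumbnail CTR fine-tune might look like, here is a rough sketch. All file names, column names, and epoch counts are hypothetical, not taken from the actual project; it frames CTR prediction as image regression with fastai.

```python
# Sketch of fine-tuning a ResNet to predict click-through rate from thumbnails.
# Paths and column names below are illustrative, not from the original post.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate label: fraction of impressions that became clicks."""
    return clicks / impressions if impressions else 0.0

def train_ctr_model(csv_path="thumbnails.csv"):
    # Imported inside the function so the helper above works without fastai.
    from fastai.vision.all import (
        DataBlock, ImageBlock, RegressionBlock, ColReader, RandomSplitter,
        Resize, vision_learner, resnet18,
    )
    import pandas as pd

    df = pd.read_csv(csv_path)  # assumed columns: filename, clicks, impressions
    df["ctr"] = [ctr(c, i) for c, i in zip(df.clicks, df.impressions)]

    dls = DataBlock(
        blocks=(ImageBlock, RegressionBlock),
        get_x=ColReader("filename"),
        get_y=ColReader("ctr"),
        splitter=RandomSplitter(valid_pct=0.2, seed=42),
        item_tfms=Resize(224),
    ).dataloaders(df)

    # y_range clamps predictions to [0, 1], since CTR is a proportion.
    learn = vision_learner(dls, resnet18, y_range=(0, 1))
    learn.fine_tune(5)
    return learn
```

Using `RegressionBlock` instead of the usual `CategoryBlock` is what turns the classifier recipe from the course into a predictor of a continuous quantity.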
Thanks Jeremy for this wonderful course!
First post here. Just restarted my DL journey a few days ago. (I’ve tried several times in the past but would always hit a roadblock; I tried a few books but could never get past the first few chapters.) I’m liking fastai’s approach so far and I’m excited to hopefully stay the course this time around.
I really love pizza and thought it’d be cool to build a pizza classifier. I trained it on images of NY-style and Detroit-style pizza, using resnet34 as my pretrained model. I noticed some interesting cases: when I’d feed it a picture of a whole Detroit-style pizza, it would predict NY style. The opposite did not occur, at least with the sample images I tried. It did better on cases where a Detroit-style pizza was being lifted up and you could see the crust. I downloaded the images via fastai’s download_images helper function.
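For anyone wanting to reproduce something similar, the download-and-train pipeline roughly goes like this. Folder names and hyperparameters are illustrative, not from the actual notebook, and the image URLs are assumed to be gathered separately (the course uses a DuckDuckGo search helper for that).

```python
# Sketch of the download-and-train pipeline from lesson 1, applied to pizza.
from pathlib import Path

def label_from_folder(path) -> str:
    """Each image's label is the name of the folder it sits in."""
    return Path(path).parent.name

def build_pizza_classifier(urls_by_style, data_dir="pizza_images"):
    # urls_by_style: e.g. {"ny": [...image urls...], "detroit": [...]}
    from fastai.vision.all import (
        download_images, DataBlock, ImageBlock, CategoryBlock,
        get_image_files, RandomSplitter, Resize, vision_learner,
        resnet34, error_rate,
    )
    path = Path(data_dir)
    for style, urls in urls_by_style.items():
        dest = path / style
        dest.mkdir(parents=True, exist_ok=True)
        download_images(dest, urls=urls)

    dls = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=get_image_files,
        get_y=label_from_folder,
        splitter=RandomSplitter(valid_pct=0.2, seed=42),
        item_tfms=Resize(224),
    ).dataloaders(path)

    learn = vision_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(3)
    return learn
```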
Who else loves pizza?
Hi, I want to build a face recognition system that can recognize around 200 people with good speed and accuracy. When I googled around, I found that ArcFace is a good option, but I don’t know how to use ArcFace or what preprocessing steps are involved in preparing the dataset. I also saw that MTCNN is the built-in face detection model in ArcFace. Should I try another face detection model, or should I just use the default MTCNN? Kindly help me. I did google around but did not know what to do. If you could give me an outline of ArcFace and how to use it for a real-time application, that would be wonderful. Thank you in advance.
Hey there, I’m Jack, a web developer who has decided to dive headfirst into the fascinating world of deep learning.
I used to think machine learning was as mysterious as deciphering ancient hieroglyphs, due to the myth that you need a Master’s degree in statistics or something close.
Thanks to the wonderful teacher, Jeremy Howard, I now approach deep learning with curiosity and not fear.
Back in the day, my parents were potato farmers, dealing with lackluster yields because they didn’t have an agronomist to guide them. So, fueled by curiosity (and a bit of nostalgia for those potato days), I decided to test whether deep learning could be used to detect plant diseases.
I trained a model to detect early blight and late blight, two common potato diseases. I got the dataset from Kaggle.
And guess what? It predicted the two common potato diseases with >97% accuracy!
Here is the notebook; I would appreciate your review and feedback:
Potato diseases notebook
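Once a model like this is trained, the standard fastai way to sanity-check that >97% figure is to look at the confusion matrix and the worst predictions. A hedged sketch (function and variable names are mine, not from the notebook):

```python
# Sketch: inspecting a trained fastai learner's errors and running inference.
# `learn` is assumed to be an already-trained fastai Learner.

def evaluate(learn):
    from fastai.vision.all import ClassificationInterpretation
    interp = ClassificationInterpretation.from_learner(learn)
    interp.plot_confusion_matrix()   # e.g. early blight vs late blight vs healthy
    interp.plot_top_losses(9)        # the leaf images the model was most wrong about

def predict_leaf(learn, image_path):
    """Return the predicted class and the model's confidence in it."""
    label, _, probs = learn.predict(image_path)
    return str(label), float(probs.max())
```

`plot_top_losses` is especially useful here: it often surfaces mislabeled or ambiguous leaf photos that inflate or deflate the headline accuracy.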
As recommended in the course, I wrote a blog post for the above project using GitHub Pages.
Hi there, I’m Navier, a fullstack engineer, now getting into AI because of the exciting possibilities.
I appreciate that the course jumps straight to the code.
I didn’t have many ideas, so for lesson 1 I started by building a classifier for different types of art: Notebook.
I made a small change to your Kaggle notebook and built a Costco vs. Walmart receipt classifier (here). It’s amazing how little code was needed. I still have no idea what the magic is, so I’ll follow the next videos to learn more.
This is my first post here and I’m excited to share my ML project, a Disney Princess Recognizer. As a data analyst with a PhD in Mathematics, I recently ventured into the world of machine learning through the fastai course, aiming to become an ML engineer.
Inspired by my two-year-old daughter’s love for Disney Princesses, I created a model that identifies these characters in images. It’s been a hit at home!
I’ve deployed it on Hugging Face via Gradio. Test it out here: Disney Princess Recognizer.
- Hugging Face Spaces
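The Gradio-on-Spaces setup described above generally boils down to a short `app.py` wrapping an exported learner. A sketch with hypothetical file and label names:

```python
# Sketch of a Gradio app wrapping an exported fastai learner for a
# Hugging Face Space. The model file name and labels are hypothetical.

def format_preds(labels, probs):
    """Map each class label to its probability, the dict Gradio's Label expects."""
    return {str(l): float(p) for l, p in zip(labels, probs)}

def build_app(model_file="princess_model.pkl"):
    import gradio as gr
    from fastai.vision.all import load_learner

    learn = load_learner(model_file)  # exported earlier with learn.export()

    def classify(img):
        _, _, probs = learn.predict(img)
        return format_preds(learn.dls.vocab, probs)

    return gr.Interface(
        fn=classify,
        inputs=gr.Image(type="pil"),
        outputs=gr.Label(num_top_classes=3),
    )

# In the Space's app.py one would then call: build_app().launch()
```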
@lsikora Nice work! Well-organized post!
@Batian Love the inspiring story, mate! It’s a very practical and useful model! But the Kaggle notebook link doesn’t work. I’d love to check it out.
Hi @Chuhao ,
Sorry for the broken link. Here is the link to the notebook:
@Batian Thanks, just checked it out. Interesting to see all the potato leaves.
I just made a simple notebook to classify pet facial expressions using a dataset I found on Kaggle here. If anybody is interested or sees anything I could have done better, please let me know!
Hey, I just finished the Part 1 video and built a project:
Dollar or Rupee identifier
The project is a Dollar or Indian Rupee currency identifier, and it achieved very high accuracy. I felt really excited after completing this, and it motivated me to explore more.
I was very new to the deep learning field, and this session really inspired me to learn more and create some good projects.
Rishikesh K V
This is my first post on this forum - I just completed the first lesson of Practical Deep Learning and I’ve been tinkering with a few notebooks. Here’s the brief project description from the top of my project notebook:
My project aims to analyze a soundclip of a melodic interval played on guitar and determine whether the notes were played as a hammer-on or not. This is the first step in what I hope will be a larger, more useful project. I want to expand it, getting it to identify a variety of legato techniques on guitar, e.g. slides, pull-offs, tapping.
Long-term, I’d like to build a model that can take a song as input, parse it into small soundclips, and analyze the soundclips for hammer-ons, pull-offs, slides, and other articulations. This goal will probably change as I learn more, but it’s a start!
I teach guitar for a living, and I constantly need to search through enormous libraries of guitar music to find songs that both appeal to the particular student I’m instructing and that emphasize certain techniques they are learning. Hence, this project.
The idea was inspired by Ethan Sutin’s example project, mentioned in chapter one of fast.ai’s fastbook under the section “Image Recognizers Can Tackle Non-Image Tasks”.
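That fastbook section’s trick is to render non-image data as images so an ordinary image classifier can handle it; for audio, that means turning each clip into a spectrogram. A minimal numpy sketch of that step (frame and hop sizes are assumed defaults, not values from this project):

```python
# Minimal sketch of turning an audio clip into a spectrogram array, which can
# then be saved as an image and fed to a fastai image classifier.
import numpy as np

def spectrogram(signal: np.ndarray, frame_size: int = 512, hop: int = 256) -> np.ndarray:
    """Log-magnitude STFT: rows are frequency bins, columns are time frames."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_size] * window for i in range(n_frames)
    ])
    mags = np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame_size//2 + 1)
    return np.log1p(mags).T                     # transpose to freq x time

# A pure 440 Hz tone should concentrate energy in a single frequency bin.
sr = 8000
t = np.arange(sr) / sr                          # one second of audio
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

A hammer-on vs. a picked note should show up as a difference in the attack transient at the left edge of each note’s spectrogram, which is exactly the kind of local visual pattern a convnet is good at picking up.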
I’m not sure yet if I’m on the right track with this project, but it’s been fun to build, and I’m getting more comfortable with Jupyter and the fastai library!
Able to differentiate between a car :red_car: and a bike
First comment here. As part of the first lesson, I made a hen vs. rooster model.
This is my first post on this forum and I just completed Lesson 1. As a part of this lesson, I built an emu vs ostrich classifier.
I kept things simple and made an “Is it a Tiger?” classifier.
It checks images of cats and tigers.