Share your work here ✅

Hi everyone, has anyone checked their older models on Hugging Face?
My image detection model worked until last week, but it doesn’t seem to be working anymore.
Did a new Gradio version come out recently?

Actually, it is the same library we’ve talked about before. They just show how to visualize heatmaps alongside bounding boxes.

Oh God! This was so damn unbelievable :heart_eyes:!

So I tried to build a model after Lesson 1 that could classify the 7 continents based on their map images. I was very sceptical about this (I don’t know why!).

Turns out our model can not only do it, but do it highly accurately :smiley:.

Just look at this beauty!

North America:


South America:


Here is the Colab NB.


Try pinning the torch version in your requirements.txt to 1.12.1 (1.13 broke a few things):

torch==1.12.1

Hi. I had that issue recently. Did you try ‘Restart this Space’? It worked for me. I think a Space stops periodically due to inactivity.


I have made the following videos and am working on uploading new ones every week.

Click the link Here

Have a look!


Was revising Chapter 2, so I’ve made a Gradio interface for it now:


hi, I just sent my request to join the fastai organization on HF. Looking forward to sharing and learning :blush:


Oh yeah thanks for sharing.


Just built a breast cancer classifier from the second lecture: BreastCancerUsingFastAI | Kaggle

and built a Hugging Face app on it.

Although it performed well on training and validation data, performance on googled photos was poor. Gotta improve it; any suggestions on how to improve performance on this type of data?


Hi, I learnt machine learning two years ago and was trying to learn deep learning through multiple courses until I stumbled upon Fast AI 6 months back. Bought the book and read through the first 5 chapters and also listened to the first 4 videos (including Course 0). I am now able to make progress in my deep learning journey. Took Jeremy’s advice and created a deep learning image classifier using the CIFAR dataset. Tried a couple of different neural networks and epochs. Achieved an error rate of 20% with resnet34 and 24% with resnet18.



For some reason ImageClassifierCleaner was not working in AWS SageMaker. Are there any known reasons for this?


Hello everyone, I have tried this course before a while back but never finished it. I am hoping to finish it this time.
This idea has to do with bowel movements so if you are squeamish, you could probably skip this one. However I do need some help refining this so if you’re okay with the idea of stool classification and could help me out, I’d appreciate it.
I recently had an idea for an app with an API that you could post photos of your poop to, and it would classify it into one of the 7 categories of stool: Bristol Stool Chart | Faecal | Continence Foundation of Australia. I tried using DuckDuckGo to search “bristol stool type 1”, but it comes up with charts showing all the poops, so the classifier couldn’t learn much because each example had all 7 types of stool in it. I heard Jeremy say that you don’t need lots of data to train this model since we’re using transfer learning, so I manually photoshopped out one of the charts and added the data in buckets to the Kaggle notebook posted below. I set valid_pct=0.01 and kept the seed of 42, since I will need the answer to the universe for this one. When I iterate 10 times, it thinks the validation image, which is a type 1 stool, is type 1 with ~48% surety. (I’m still tweaking this to see if I can get it higher, but with such little data I may just be overfitting…)
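A side note on valid_pct=0.01: fastai’s RandomSplitter, as far as I understand it, holds out roughly int(valid_pct * n) shuffled items, so with a small dataset the validation set can be nearly (or literally) empty. A quick stdlib sketch:

```python
# Rough sketch of why valid_pct=0.01 leaves almost nothing for validation.
# fastai's RandomSplitter (to my understanding) holds out about
# int(valid_pct * n) items, so with a small dataset that can be zero or
# one image, which is far too few to judge overfitting.
def valid_size(n_items, valid_pct):
    return int(valid_pct * n_items)

for n in (50, 100, 500):
    print(n, "images ->", valid_size(n, 0.01), "validation images")
# With 50 images and valid_pct=0.01 the holdout is empty; even the
# fastai default of 0.2 only gives 10.
```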
I need help finding better photos of each class of stool. I tried searching for the source data that led to the creation of the Bristol stool chart but didn’t come up with anything.
I think I may have found a source for photos: Rate My Poo seems to have photos of stool that I can manually classify. Maybe if I get 10 of each it will perform better.
I have a roughly working notebook here: Bristol stool chart classifier, first go at it. | Kaggle


Hi all,

After going through the first lesson, I built my first app: Is it Korean Soup or Vietnamese Soup?

It’s cold here in New England and I do like Asian soups, so I thought, “Is it actually possible to distinguish between Korean and Vietnamese?” The model seems to be pretty good at figuring it out.

You can find my Kaggle notebook for generating the model here. It is essentially the same as the “Is it A Bird?” notebook.

Also, I wrote an article about my experience on Medium. Mainly I wanted to document the obstacles I ran into (mostly VSCode/Python environment related). Any feedback is appreciated.



Hey all. Just finished Chapter 2 and I’m having a blast! Here’s one of the things I’ve been playing around with:

And I wrote up a post about it here.


Hi all, I made a prototype pseudoscience / fake news detector and wordcloud generator for websites using Fastai text transfer learning:

Here’s the Huggingface spaces demo: Pseudometer - a Hugging Face Space by sbavery
Here’s the Github repo (using nbdev): GitHub - sbavery/pseudometer: Pseudoscience detector using machine learning

Please let me know if you have feedback or would like to collaborate! I’ve so far found this is a challenging problem to define and solve.
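For anyone curious about the wordcloud side, the frequency counting underneath it can be sketched in plain stdlib Python (the sample text and stopword list here are made up; the real project presumably uses the wordcloud library on scraped page text):

```python
# The word-frequency step behind a wordcloud, in plain stdlib Python.
# The stopword list and sample text are illustrative only.
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "is", "in"}

def word_freqs(text, top=5):
    # lowercase, pull out word tokens, drop stopwords, count the rest
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(top)

sample = "The quantum healing crystals amplify the quantum energy of the body"
print(word_freqs(sample))
```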


Hey, I made a little flower classifier based on the pets notebook, as my parents got a bouquet for their anniversary and I wanted to know what flowers were in it.


I played Pokémon Blue when I was a kid, so I strive to be the very best and catch them all. Because of this childhood value, I’m trying to train the very best stool image classifier. I’m still trying to gather enough data for my Bristol stool chart classifier, and to save time I wrote a quick little Python script to grab photos from a gentleman’s blog who posts high-quality photos of his bowel movements each day. (I contacted him through email and explained what I’m doing; he said it’s fine to use his photos for this classifier, and just to remember him when I’m famous.) I have a time.sleep(1) in there so Blogspot doesn’t think I’m a bot, and so far it’s working. I’ll post my script here, but it got me thinking: this is something you have to do a lot in data science. Does fastai have an image/web-scraping utility? I also wonder, if I described what I want to GitHub Copilot, whether it would know of more stool blogs…
I got 1047 images from running the for loop 50 times, so it worked.

import re
import time

import requests

url = ""  # the blog URL goes here
r = requests.get(url)

def save_pics(page_text):
    # find every .png image URL on the page and download it
    pics = re.findall(r'(https?://\S+?\.png)', page_text)
    for pic in pics:
        print('saving', pic)
        res = requests.get(pic)
        with open(pic.split('/')[-1], 'wb') as wf:
            wf.write(res.content)
        time.sleep(1)  # be polite so Blogspot doesn't flag me as a bot

save_pics(r.text)
# follow the "older posts" pager link and save each older page's images
more = re.findall(r"blog-pager-older-link' href='(\S*by-date)", r.text)
for i in range(10):
    r = requests.get(more[0] + '=true')
    save_pics(r.text)
    more = re.findall(r"blog-pager-older-link' href='(\S*by-date)", r.text)

Distinguishing which rock band is which among The Beatles, Led Zeppelin, Pink Floyd, The Rolling Stones, and Aerosmith?

This is the model I made (Link to Kaggle). It works terribly, and I expected it! An opportunity to learn something new. I picked five rock bands and trained the model to distinguish them. I knew it would be impossible to get good accuracy with the simplest model: band members may appear in many different clothes, with unlimited background scenes, from different eras, and with many more variations.
And to make it even more challenging, this is a sample of the images downloaded for training the model!!

Pretty hopeless, isn’t it? At least half of these six pictures have nothing to do with the actual bands, and in the remaining three it is very hard to find a feature the model can depend on. Therefore, my first question would be:

:question: What’s the suggested approach to make a model that works? Would it be to first train a model on individual members of each band, and then, if it finds a majority of those faces in a picture, classify it as that band? Or are there advanced methods available that I am not aware of?

The results of the test were as follows:

Mostly wrong, but then the other question I have is:

:question: Why does the returned which_band not correspond to the highest probability in probs? In defining the search terms I used this order: searches = 'The Beatles', 'Led Zeppelin', 'Pink Floyd', 'The Rolling Stones', 'Aerosmith'. Therefore I expect, first, that the returned tensor of probabilities represents:
[p(‘The Beatles’), p(‘Led Zeppelin’), p(‘Pink Floyd’), p(‘The Rolling Stones’), p(‘Aerosmith’)]
and second, that which_band picks the name corresponding to the highest probability in probs. What is it that I am missing?

Thank you for your help. I hope this challenge will be helpful for others, too.


Experiment with…


pred, ndx, probs = learn.predict(...)
print(f'This is a {pred}')
print(f'Probability: {probs[ndx]}')
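One likely explanation for the mismatch: fastai builds the label vocab by sorting the class names alphabetically, so probs is indexed by sorted order, not by the order of your searches tuple (worth confirming against learn.dls.vocab in your notebook). A quick check:

```python
# fastai sorts class labels alphabetically into the vocab, so the
# probability tensor is indexed by sorted order, not search order.
searches = ('The Beatles', 'Led Zeppelin', 'Pink Floyd',
            'The Rolling Stones', 'Aerosmith')
vocab = sorted(searches)
print(vocab)
# 'Aerosmith' sorts first, so probs[0] is p('Aerosmith'),
# not p('The Beatles') as the search order would suggest.
```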

One thing that may help the model be more accurate (this is just a thought; I don’t know if it will actually help) is to add DuckDuckGo searches of individual band members to the bands’ directories, like Mick Jagger to the Rolling Stones dir and the other members to each band’s directory. Hope this helps, but if not it may still be worth the experiment.