Share your work here ✅

My first Hugging Face application detects whether or not you are smiling.
It works much better on close-up portraits.
:slight_smile: or :frowning:

1 Like

I made an application that will identify trees from pictures of their leaves :evergreen_tree:
So far, it is limited to European trees listed in the European Atlas.
I scraped the list of tree names and then built a dataset from photographs collected with DuckDuckGo.
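
In case it helps anyone, a minimal sketch of the download step looks roughly like this (not my exact notebook code; it assumes the `duckduckgo_search` package, and the tree names shown are just stand-ins for the list scraped from the atlas):

```python
# Rough sketch of the dataset-building step, not the exact notebook code.
# Assumes the `duckduckgo_search` package; `tree_names` is a stand-in
# for the list scraped from the European Atlas.
from pathlib import Path
from duckduckgo_search import DDGS
from fastdownload import download_url

tree_names = ["Quercus robur", "Fagus sylvatica"]  # example entries
path = Path("tree_leaves")

with DDGS() as ddgs:
    for name in tree_names:
        dest = path/name
        dest.mkdir(parents=True, exist_ok=True)
        for i, r in enumerate(ddgs.images(f"{name} leaf", max_results=50)):
            try:
                download_url(r["image"], dest/f"{i}.jpg", show_progress=False)
            except Exception:
                pass  # skip broken links
```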

It works quite well!

Tree Classifier

2 Likes

Just completed lesson 1: Getting started. Super informative, and I really look forward to progressing! I changed up the model and posted it to my blog here.
I struggled a bit getting the blog set up and still need to clean up the boilerplate, but it was cool and straightforward. Looking forward to lesson 2 today!

1 Like

Hello everyone!

I made a model to classify things as Huggable or Not?

You can play with it on the website or on Hugging Face Spaces.

I fine-tuned a pre-trained model (resnet34) on images of multiple examples for each category: for huggable, photos of pillows; for not huggable, images of cacti.

And it works surprisingly well for how little data it was given: just 4 examples for each category, with 50 images per example, so about 400 photos in total.
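
For anyone wanting to reproduce this, the training itself is only a few lines in fastai. A minimal sketch, assuming the images are organized into one folder per category (the folder names here are hypothetical):

```python
# Minimal fine-tuning sketch; "huggable_data" is a hypothetical folder
# with one subfolder per category ("huggable", "not_huggable").
from fastai.vision.all import *

path = Path("huggable_data")
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(3)  # a few epochs is usually plenty for ~400 images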

The notebook for the model is here.

Prediction examples:
plushie
chainsaw
bed
fire
knife
snowman
towel

And I was able to create this model with the website after just 2 lectures of the course. Awesome!

I’d love ideas, feedback, and suggestions should anyone have any.

Thanks

5 Likes

Just finished building v1 of my Spreadsheet image classifier with DL. It takes images from Google Sheets or Rows.com and classifies each image based on which spreadsheet software it shows.

Notes:

  • Small training set: 31 combined images from Sheets and Rows.com
  • Little code: The fastai library is great

Here’s the link to the public Kaggle notebook

2 Likes

When training the “dog vs cat” detector in Lesson 2, I decided to throw it a curveball and show it a picture of Puppycat (from Bee and Puppycat) and was very excited by the result.

After that, like Gagan, I decided to make a “Bird, Plane, Superman” detector, which ended up working pretty well. I had to make sure to specify “flying”, though, to get more comparable images.

Finally, I had a failed attempt at a tool to detect poison ivy/oak. I trained it on images of those two and of common mimics, but the error rate was quite high and the model took a long time to train. The images were problematic: some were up close, some far away, and details were not always visible. I tried to retrieve only images of the leaves, but error rates were still high. In terms of training, I don’t care which mimic it is; I only want to know whether it is a mimic, so I may look for a way to collapse the mimics into a single “not poison ivy” category. I will keep working on it, since all too often I end up staring at a plant wondering what I stepped in.
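
If it helps, collapsing the mimics only takes a custom labelling function in the DataBlock. A sketch, assuming one folder per species (the folder names here are hypothetical):

```python
# Sketch: collapse all mimic species into one "not_poison_ivy" label.
# Assumes images are stored in one folder per species under "plants/".
from fastai.vision.all import *

TARGETS = {"poison_ivy", "poison_oak"}

def binary_label(fn):
    # The parent folder name is the species; anything that is not a
    # target species becomes the single mimic class.
    species = Path(fn).parent.name
    return species if species in TARGETS else "not_poison_ivy"

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=binary_label,
    splitter=RandomSplitter(seed=42),
    item_tfms=Resize(224))
dls = dblock.dataloaders(Path("plants"))
```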

3 Likes

After seeing some of the amazing images produced with DALL·E 2, but not having access to it myself, I was very excited when stability.ai released the Stable Diffusion model a few days ago. Experiencing that feeling of being ‘left out’ inspired me to help make Stable Diffusion even more accessible, so I decided to create a Twitter bot that puts image generation with Stable Diffusion only a tweet away. All you need to do is create a tweet and @-mention @diffusionbot ( link) in your tweet, and the bot will use the tweet as the prompt to generate 4 images, which it will include in a reply to your tweet. Please give it a try!

Here are a few of my favorites that have been generated so far…

Or even better, you can look at all of the images by scrolling through the media tab on the bot’s profile page…

https://twitter.com/diffusionbot/media

Alternatively, if you want to try it out on Hugging Face Spaces, you can use this link… (warning: the server seems pretty overwhelmed right now, so you’ll have to be patient)

Or you can access the code and getting-started info on GitHub.

I built the bot using a combination of code from the repo and from the Hugging Face Spaces app, and hooked it up to Twitter using the tweepy library.
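
The core of the Twitter plumbing is surprisingly small. Here is a rough sketch (not the production bot), assuming Twitter API v1.1 credentials and a hypothetical `generate_images(prompt)` wrapper around the Stable Diffusion pipeline:

```python
# Rough sketch of the Twitter plumbing, not the production bot.
# Assumes Twitter API v1.1 credentials and a hypothetical
# generate_images(prompt) wrapper that returns image file paths.
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

def reply_to_mentions(since_id):
    for tweet in api.mentions_timeline(since_id=since_id):
        prompt = tweet.text.replace("@diffusionbot", "").strip()
        images = generate_images(prompt)  # hypothetical SD wrapper
        media_ids = [api.media_upload(p).media_id for p in images]
        api.update_status(
            status=f"@{tweet.user.screen_name} Here you go!",
            in_reply_to_status_id=tweet.id,
            media_ids=media_ids)
        since_id = max(since_id, tweet.id)
    return since_id
```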

Up next, I’m planning on adding the ability to control the hyperparameters via the tweet, as well as exploring the possibility of adding image-to-image capabilities using the recently released script. If you have any questions, comments, or suggestions on how I can make this better, please let me know!

10 Likes

Nice work @matdmiller! I’m also really excited about Stable Diffusion and impressed with everything that stability.ai is building for the open-source community.

1 Like

Thank you Jeremy and team for creating this wonderful course. I am a complete newbie, and yet following your video and book was not very difficult. For my first project, I tried classifying an image as a frontal car crash vs. a rollover car crash. I started a bit more ambitiously with a few more categories, but on inspecting the downloaded images I realized they were not very different between classes, so I had to strip it down to just two categories. Even within these two categories, some of the images are poorly labeled. After watching lesson 2 of the course, I tried the ImageClassifierCleaner function to unlink some images.
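
For anyone who hasn’t seen it, the cleaner workflow from the book looks like this (assuming `learn` is the trained Learner and `path` is the data folder):

```python
# The cleaner pattern from the book: display the widget, mark images,
# then apply the recorded deletions and relabels.
# Assumes `learn` is the trained Learner and `path` is the data folder.
import shutil
from fastai.vision.widgets import ImageClassifierCleaner

cleaner = ImageClassifierCleaner(learn)
cleaner  # shows the widget in the notebook

# After marking images in the widget:
for idx in cleaner.delete():
    cleaner.fns[idx].unlink()  # remove bad or mislabeled images
for idx, cat in cleaner.change():
    shutil.move(str(cleaner.fns[idx]), path/cat)  # move to the correct class
```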

Overall, I think the model can do much better. I was only able to achieve an error rate of 10%, even after the image cleanup and after switching to a deeper model (resnet50). Any suggestions on what else I could do would be very useful. Here is my notebook.

Thanks again for such an interesting class. Really enjoying it.

2 Likes

I am immensely enjoying the book and the course. Many thanks to everyone involved!

After completing lesson 1, I built a classifier to differentiate between spaceships from Star Wars versus spaceships from Star Trek.

As part of the learning process, I tried to explain the elements of the code in my own words and wrote a short blog post about it.

3 Likes

Great post - thanks for sharing!

1 Like

For people interested in the RSNA 2022 competition on Kaggle, my notebook, which extracts all the DICOM metadata, might be interesting. With fastai it was super easy to read the headers into a dataframe (though it took a very long time). Feel free to use my data to save yourself some time :wink:
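
The whole extraction is only a few lines with fastai’s medical imaging module. A sketch, with a placeholder dataset path (pydicom must be installed):

```python
# Sketch of the extraction; the dataset path is a placeholder.
# Importing fastai.medical.imaging patches pandas with from_dicoms
# (pydicom must be installed).
import pandas as pd
from fastai.medical.imaging import get_dicom_files

path = "../input/rsna-2022-cervical-spine-fracture-detection/train_images"
fns = get_dicom_files(path)
df = pd.DataFrame.from_dicoms(fns)  # reads every header; this is the slow part
df.to_csv("dicom_metadata.csv", index=False)
```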

2 Likes

Some data analysis with fastai for the current RSNA challenge on Kaggle, largely inspired by an old notebook of Jeremy’s.

I redid the exercise and had a much better experience. Here is my watch classifier (7 categories). It works quite well, I thought.
(Azure Bing search, about 150 images for each watch category, resnet18, 10 epochs.)
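
For anyone who wants the same setup, a condensed sketch of the pipeline (assuming the `search_images_bing` helper from fastbook and an Azure key; the watch categories shown are just examples, not the real seven):

```python
# Condensed sketch; the category names are examples, not the real seven.
from fastai.vision.all import *
from fastbook import search_images_bing

key = "YOUR_AZURE_SEARCH_KEY"  # Azure Bing Search key (placeholder)
categories = ["dive watch", "chronograph", "field watch"]
path = Path("watches")
for c in categories:
    dest = path/c
    dest.mkdir(parents=True, exist_ok=True)
    urls = search_images_bing(key, c).attrgot("contentUrl")
    download_images(dest, urls=urls[:150])  # ~150 images per category

dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(10)  # 10 epochs, as in the run above
```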

4 Likes

As part of “Create an image recognition model using data you curate, and deploy it on the web”, I created a classifier to figure out whether an image belongs to Autumn (my 5-year-old rat terrier) or another random rat terrier. I noticed that the classifier fails to classify correctly in cases where I would struggle too, if I did not know the right label :slight_smile:

Here is the Hugging Face app: IsAutumn - a Hugging Face Space by Ersin. Only 3 of the 6 sample images are Autumn’s (the first three), but the classifier thinks the first 5 are Autumn.

2 Likes

To work on the concepts from lessons 1 and 2, I built this ML classification model that takes an image of a spreadsheet and classifies it as Rows, Excel, Sheets, or Numbers.

It is not perfect (more training data would make it better), but it works well enough.

You can access it and play around with it here.

Demo GIF

2 Likes

Hello everyone, my name is Vikas Awasthi. I completed lesson 1 of the course and I am really happy after making my first project. I trained my model to differentiate between “Indian Sculpture” and “Egyptian Sculpture”.
Here are some examples of Indian and Egyptian sculptures:

Testing on this image:
image

Testing on another image:
image

Well! I’m feeling really great because it is predicting accurately. :smiley:

But I did find it somewhat challenging to understand the code, because I do not have much experience with coding.
After searching on Google, I came close to understanding the code, and I shared all of that experience in my blog, so anybody who, like me, has little knowledge of code can get some understanding of it.
Link to the blog:

Link to my project’s Kaggle notebook:

5 Likes

Hey,
since you’re a beginner like me, how did you upload images in your notebook? When I did, my notebook appeared in my repo but not on the blog.
When I open your lesson on Google Colab, I see that all the images you uploaded were in this format: ![image.png](data:image/png;base64,iVBORw0K...)
Are these images uploaded to the repo or somewhere else?
thanks

Hi Smail,
I did not upload the images to the repo. What I actually did was just copy and paste the images into my notebook: for whatever article or documentation I wanted to share, I first took a screenshot, copied it, and pasted it into a markdown cell of my notebook. That worked for me, and I think it will work for you too. Thanks.

1 Like

I will try your method.
Thanks a lot, Vikas, for your time.
Keep learning.

1 Like