Share your work here ✅

Show us what you’ve made using the things you’re learning in the course!

I love seeing stuff made by folks new to deep learning, so please share your work even if you’re a beginner. Sharing your work helps inspire others on their deep learning journey!

Here are two great ways to share your work, both free and based on Jupyter Notebooks:

  • fastpages, which lets you create a blog using Markdown and Jupyter notebooks
  • Kaggle Code, just like you saw me use in the lessons – create your own notebook, then click “Share” and set it to “Public”. You’ll get a URL you can share here.
46 Likes

Last night I attended the first session of the fast.ai course being hosted live at the University of Queensland. This morning, within a couple of hours, I trained a model that does a surprisingly decent job of differentiating between damaged and undamaged cars. It took a few iterations to find search strings specific enough to return images that could be used to fine-tune a model with a reasonable error rate.

My first attempt, using “photos of normal traffic” and “photos of traffic accidents”, was too generic.
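For context, the search-and-download step follows the same pattern Jeremy used in the lesson. A rough sketch (the search terms and folder name here are just illustrative, not my exact notebook):

```python
from duckduckgo_search import ddg_images
from fastcore.all import L
from fastai.vision.all import *

def search_images(term, max_images=200):
    "Return a list of image URLs for a DuckDuckGo search term."
    return L(ddg_images(term, max_results=max_images)).itemgot('image')

searches = 'photo of a damaged car', 'photo of an undamaged car'
path = Path('car_damage')
for term in searches:
    dest = path/term
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(term))
    resize_images(dest, max_size=400, dest=dest)  # keep downloads small
```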

It feels a little uncomfortable to run so much code that I don’t fully understand yet, but I am taking Jeremy’s advice and building first.

Here’s my car damage classifier notebook!
The performance is highly dependent on the set of images that are downloaded with each search string. When I re-ran the notebook from the shared public link above, the ‘photo of a damaged car’ search returned a Mercedes driving through a flooded road. Not exactly a damaged car, unless you count water damage.

One question I have is about housekeeping of the image files on Kaggle. I was creating folders and paths to the image files. How do you delete the folders you no longer want?
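For reference, standard Python file operations seem to work in Kaggle notebooks, so something like this should remove a whole folder tree (a sketch; the folder name is hypothetical):

```python
import shutil
from pathlib import Path

unwanted = Path('damaged_cars_v1')  # hypothetical folder from an earlier run
if unwanted.exists():
    shutil.rmtree(unwanted)  # deletes the folder and everything inside it
```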


40 Likes

For those new to the course and perhaps unaware, I built a library called ohmeow-blurr based primarily on things I’ve learned from the course and book. It’s a fastai-first library for developers who want to use fastai to train Hugging Face transformers.

It also illustrates something Jeremy mentioned in the first class …

If you look at the code:

  1. You won’t see a lot of complicated math.
  2. It was built on subsets of open source datasets available via fastai and Hugging Face.
  3. Everything has been trained on a four-year-old DL rig with a single 1080 Ti GPU.

For those interested in giving it a go, I created a blurrified version of Jeremy’s “Iterate like a grandmaster” notebook shared on the Lesson 1 official topic thread.

22 Likes

It’s weird because it’s academically atypical, and yet, as I think about how I’ve learned to build or do anything … this is how it starts. In fact, I would argue that your insights about getting quality data for your task are a result of “just running the code.”

By week 3 or 4, you’ll know quite a bit about what is going on underneath it all :slight_smile:

9 Likes

It feels like forever since I last looked into the fast.ai library. So I’m rebuilding my understanding from scratch, starting with a multi-class classification model.

I also saw some Hugging Face Spaces demos a while ago and thought they were pretty neat. Earlier this week, when looking into it, I found out that it’s definitely possible to use fast.ai models on Hugging Face Spaces via Gradio, so I figured why not try that.

So, what I’ve done is build a food image classifier (Food-101 dataset) and host it as a live interactive demo on HF Spaces. I find it more fun to be able to play around with the model with real images.

Live demo at : Food Image Classifier (Food-101|ResNet50|fast.ai) - a Hugging Face Space by suvash

Let me know what you think. If there’s enough interest, I can share the notebook as well. It’s very similar to the classifying breeds tutorial on fastai docs.

Overall, this was fun to build. :raised_hands:

EDIT: I’ve now also added the notebooks used for training (and testing out Gradio inference) in the same HF repo (notebooks folder) if anybody wants to take a look at them.

27 Likes

Feel free to join the fastai organization on HF and upload your work there :slight_smile:

10 Likes

Wow, this is really cool and mouth-watering! Thanks for the inspiration @suvash

1 Like

This is my beginner post about converting images into tensors and back. It’s my way of documenting my study in a searchable environment (Fastpages), mainly for me but maybe for others too. I explore the fastai ToTensor class, then torchvision’s ToTensor class, and lastly just passing an image directly into a tensor constructor. The last part is about converting a tensor back into an image.
Convert image files into a tensor and back with FastAi, PIL, Torchvision and Vanilla Pytorch
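As a quick taste of what the post covers, the three approaches look roughly like this (a sketch; 'cat.jpg' stands in for any image file):

```python
import numpy as np
import torch
from torchvision import transforms
from fastai.vision.all import PILImage, ToTensor

img = PILImage.create('cat.jpg')            # any image file

# fastai: ToTensor yields a uint8 TensorImage with shape (3, H, W)
t_fastai = ToTensor()(img)

# torchvision: ToTensor also rescales to floats in [0, 1]
t_tv = transforms.ToTensor()(img)

# "vanilla" PyTorch: via NumPy, then move channels first
t_plain = torch.from_numpy(np.array(img)).permute(2, 0, 1)

# ...and back from a tensor to a PIL image
img_again = transforms.ToPILImage()(t_tv)
```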

6 Likes

Thanks for sharing this, Suvash! Do you have a write-up somewhere on how to share models on HF Spaces using Gradio?

Thanks!

1 Like

Thanks for asking :raised_hands: I haven’t written about it yet; maybe that’s something I can work on in the coming days once I understand the libs better. In the meantime, I think @ilovescience has already written a pretty good post on the topic, which I followed.
It’s available at: Gradio + HuggingFace Spaces: A Tutorial | Tanishq Abraham’s blog

The thing about HF Spaces (that I’ve learnt so far) is that all the code, models, etc. are contained in the project’s git repo; HF builds/runs the Gradio app via some internally defined Dockerfile, proxies it to another URL, and then embeds that URL into the main page via an iframe.

So, in this case, all the things needed for the demo are available at https://huggingface.co/spaces/suvash/food-101-resnet50/tree/main, with app.py being the entrypoint for the Gradio project that the HF infra eventually runs after building it (proxying it at https://hf.space/embed/suvash/food-101-resnet50/+) and then injects into the HF Space page via an iframe. Pretty neat actually!
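For anyone who wants a mental model of that entrypoint, the app.py for a fastai + Gradio Space can be surprisingly small. A sketch (not the exact file from my repo; file names and label handling are illustrative):

```python
# app.py - minimal fastai + Gradio entrypoint for a Space
import gradio as gr
from fastai.vision.all import load_learner

learn = load_learner('export.pkl')   # exported fastai model stored in the repo
labels = learn.dls.vocab

def predict(img):
    "Return class probabilities for an uploaded image."
    pred, idx, probs = learn.predict(img)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

gr.Interface(fn=predict,
             inputs=gr.Image(),
             outputs=gr.Label(num_top_classes=3)).launch()
```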

I’m still curious about the operational aspects of Spaces (e.g. when the container/process is spawned/shut down/restarted, how many processes run per Space, why the build-deploy flow seems so quick, etc.), but I’m getting a bit carried away and that’s for another discussion thread. :sweat_smile:

10 Likes

I’ve also added the notebooks to the same HF repo now, if anybody wants to go through them.

2 Likes

I ended up creating “The Amazing Beard Detector” by changing two letters in the search term (from ‘bird’ to ‘beard’) :sweat_smile:

I still don’t quite understand how it’s cleaning up any potential bad images, but I think that’s what the doc function is for :wink:
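If it helps, my understanding is that the cleanup in the lesson notebook is the verify_images step, which finds downloads that can’t actually be opened as images so they can be deleted (a sketch of that pattern; the folder name is illustrative):

```python
from fastai.vision.all import get_image_files, verify_images, Path

path = Path('beards')                      # folder of downloaded images
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)                    # remove files that failed to open
```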

Next step is to somehow figure out how I can create an app from this notebook using Gradio and HF Spaces.

29 Likes

Lion Cat Classifier:

14 Likes

Hey everyone, I thought I would share my project from when I took the 2020 course. It’s probably much larger than it should have been, but anyway here is a project on classifying 1000 species of mushrooms from over 200k images :slight_smile: I think this might be the largest mushroom classifier around…?

It might give some insight into preparing data for a model, training models, and using callbacks (functions that fire during training to do something helpful, e.g. save the model weights).
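For anyone new to callbacks, they’re just passed into the training call. A minimal sketch using the built-in SaveModelCallback on the standard Pets dataset (rather than my mushroom data):

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'

def is_cat(f): return f.name[0].isupper()   # cats have capitalised filenames

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
# SaveModelCallback checkpoints the best weights seen during training
learn.fine_tune(3, cbs=SaveModelCallback(monitor='valid_loss'))
```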

Let me know if you have any questions!

23 Likes

Inspired by @suvash’s food classifier :yum:, I wanted to build a classifier that identifies Marvel characters.

To make it more interesting :star_struck:, I deployed the Gradio app on the Jarvislabs.ai CPU instance. You can play with the demo here.

Hope the model gets your favorite Marvel character right.

21 Likes

I’m not good with Python, but after some tweaking held together by duct tape, I made a Rock Paper Scissors game where you always win and the computer always loses.

26 Likes

I’m pleased to present my Kaggle notebook (which is a copy of Jeremy’s).
Elon has been in the news a lot lately, so I created a model to see whether a photo is of Elon Musk or not.

Enjoy!!

18 Likes

To make the Bird v Forest exercise a little trickier, I used much the same code as Jeremy but with four classes of birds that can be hard to tell apart: Bald Eagles, juvenile Bald Eagles, Golden Eagles, and Ospreys. One lesson learned: an Osprey is also a “tiltrotor military aircraft”, so searching for ‘Osprey Bird’ gets a better result. Anyway, with 300 pictures of each and just 15 seconds of training (5 epochs at about 00:03 each) on my WSL2 Ubuntu running on my Win11 Surface Book 3, it identified this picture from my lunchtime walk today. I thought 83% was pretty good (and correct!)

I’ve been through earlier fastai courses, so I’m excited to see so much of the hard work hidden away, allowing more focus on the problem, and of course the data. My image set could do with some cleaning…

Hit me up with any Win11, Surface, or WSL2 questions. Full disclosure: I work for Microsoft.

19 Likes

I bit off a bit more than I could chew and tried to train on aerial-view photos of cities to predict their mean temperatures. I thought this was going to be a bit of a stretch, but it turns out it works OK. I sourced the mean annual temperatures of 195 capital cities from https://en.climate-data.org/ and trained on 3 aerial-view photos of each of these cities.
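For anyone wanting to try something similar, the main trick is swapping the usual CategoryBlock for a RegressionBlock so the model predicts a number instead of a class. A sketch of that setup (the CSV and column names here are assumptions, not my exact code):

```python
from fastai.vision.all import *

# assumed: a CSV with columns 'filename' and 'mean_temp'
df = pd.read_csv('city_temps.csv')

dblock = DataBlock(
    blocks=(ImageBlock, RegressionBlock),        # image in, number out
    get_x=ColReader('filename', pref='images/'),
    get_y=ColReader('mean_temp'),
    splitter=RandomSplitter(valid_pct=0.2),
    item_tfms=Resize(224))

dls = dblock.dataloaders(df)
# y_range squashes predictions into a plausible temperature band (deg C)
learn = vision_learner(dls, resnet18, y_range=(-10, 35))
learn.fine_tune(5)
```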

I got a reasonable validation loss after a few attempts. Then I tried to predict the mean temperature of Brisbane (which was not in the training set, because it is not a national capital).

Climate-Data.org lists the mean annual temperature of Brisbane as 20.0 deg C, and the model’s prediction came out close to that, which I think is a great result!

26 Likes

Hi Fastai Friends,

After the first lesson this week I took Jeremy’s suggestion and tried my hand at actually building a model and finishing something practical before the next lesson.

I humbly present FastClouds

This is not meant to be a serious project and is just for my own learning experience.

The Problem

On-the-ground observations are a key part of weather forecasting. Most observations are taken by autonomous systems, but a few routine observations are still done manually by a human. One of these is cloud-type classification.

This manual observation is currently done at major airports around Australia. At these airports, one or more highly knowledgeable, accredited aerodrome weather observers are stationed to take manual weather observations on a fixed schedule throughout each day. But having such specialised observers at all airports all of the time is not cost-effective or realistically feasible, especially for remote locations (e.g. uninhabited islands or infrequently used aerodromes). Therefore, many of these remote or small areas miss out on observations and perhaps receive lower-quality situational awareness and forecasts as a result.

The Solution

Using deep learning and image classification to classify cloud types from photographs seemed to me a very plausible solution to this problem. So, after fastai course v5 lecture 1, I thought I’d try to do exactly that, using the vision learner example Jeremy provided as my starting point.

This model uses a ResNet and transfer learning, as per the original notebook (Is it a bird? Creating a model from your own data | Kaggle), but with three broad categories of clouds instead of just birds vs forests. These classes were chosen as per the work of Luke Howard in his “Essay on the Modifications of Clouds” (1803) (NWS JetStream - The Four Core Types of Clouds).

To create a dataset, DuckDuckGo was searched for the terms ‘cirrus clouds’, ‘cumulus clouds’, and ‘stratus clouds’.
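The loading code follows the lesson notebook closely. A sketch (assuming the downloads land in one subfolder per class):

```python
from fastai.vision.all import *

path = Path('clouds')  # assumed layout: clouds/cirrus, clouds/cumulus, clouds/stratus

dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,                       # class = parent folder name
    item_tfms=[Resize(192, method='squish')]
).dataloaders(path, bs=32)

dls.show_batch(max_n=6)   # produces a grid like the example batch below
```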

[Image: example batch of cloud photos from the dataloaders]

So, here it is for you to enjoy - FastClouds | Kaggle

I’d love ideas, feedback, and suggestions should anyone have any.

Thanks

44 Likes