Share your work here ✅

I bit off a bit more than I could chew and tried to train on aerial view photos of cities to predict their mean temperatures. I thought this was going to be a bit of a stretch, but it turns out it works OK. I sourced data from https://en.climate-data.org/ for the mean annual temperatures of 195 capital cities and trained on 3 aerial view photos of each of these cities.

I got a reasonable validation loss after a few attempts. Then I tried to predict the mean temperature of Brisbane (which was not in the training set because it is not a national capital).

Climate-Data.org gives the mean annual temperature of Brisbane as 20.0 deg C, and my model’s prediction came in close to that, which I think is a great result!
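In case anyone wants to try something similar, here is a rough sketch of what an image-regression setup like this can look like in fastai. The CSV layout, file names and y_range below are placeholders rather than exactly what I did:

```python
from fastai.vision.all import *
import pandas as pd

# Hypothetical layout: 'city_temps.csv' maps each aerial photo under images/
# to its city's mean annual temperature in deg C (columns: filename, temp_c).
df = pd.read_csv('city_temps.csv')

dls = DataBlock(
    blocks=(ImageBlock, RegressionBlock),            # regression target instead of categories
    get_x=ColReader('filename', pref='images/'),
    get_y=ColReader('temp_c'),
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(224),
).dataloaders(df, bs=16)

# y_range squashes predictions into a plausible range of mean annual temperatures
learn = vision_learner(dls, resnet34, y_range=(-10, 35), metrics=mae)
learn.fine_tune(5)

# Test on a city outside the training set, e.g. an aerial photo of Brisbane
pred, _, _ = learn.predict(PILImage.create('brisbane_aerial.jpg'))
print(pred)
```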

26 Likes

Hi Fastai Friends,

After the first lesson this week I took Jeremy’s suggestion and tried my hand at actually building a model and finishing something practical before the next lesson.

I humbly present FastClouds

This is not meant to be a serious project and is just for my own learning experience.

The Problem

On-ground observations are a key part of weather forecasting. Most observations are taken by autonomous systems, but there are still a few routine observations that are done manually by a human. One of these is cloud type classification.

This manual observation is currently done at major airports around Australia. At these airports, one or more highly knowledgeable and accredited aerodrome weather observers are stationed to take manual weather observations on a fixed schedule throughout each day. But having such specialized observers at all airports all of the time is not cost-effective or realistically feasible, especially for remote locations (e.g. uninhabited islands or infrequently used aerodromes). Therefore, many of these remote or small areas miss out on observations, and perhaps receive lower-quality situational awareness and forecasts as a result.

The Solution

Using deep learning and image classification techniques to classify cloud types from photographs seemed to me a very plausible solution to this problem. Therefore, after the fastai course v5 lecture 1, I thought I’d try to do exactly that, using the visual learner example Jeremy provided as my starting point.

This algorithm uses a resnet and transfer learning as per the original notebook (Is it a bird? Creating a model from your own data | Kaggle), but it uses three broad categories of clouds instead of just birds vs forests. These classes were chosen as per the work of Luke Howard in “Essay on the Modifications of Clouds” (1803) (NWS JetStream - The Four Core Types of Clouds).

To create a dataset, DuckDuckGo was searched for the terms: cirrus clouds, cumulus clouds, stratus clouds.
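If you want to build a similar dataset yourself, here is roughly how it can be done with the duckduckgo_search package and fastai’s download/verify helpers. The image counts, folder names and architecture below are placeholders rather than exactly what the notebook does:

```python
from fastai.vision.all import *
from duckduckgo_search import DDGS   # assumed search package; the notebook may use a different helper

searches = ['cirrus clouds', 'cumulus clouds', 'stratus clouds']
path = Path('clouds')

# Download ~50 images per class into clouds/<class>/
for term in searches:
    dest = path/term.split()[0]
    dest.mkdir(parents=True, exist_ok=True)
    with DDGS() as ddgs:
        urls = [r['image'] for r in ddgs.images(term, max_results=50)]
    download_images(dest, urls=urls)

# Drop any files that fail to open
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)

dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,                       # class comes from the folder name
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(192, method='squish'),
).dataloaders(path, bs=32)

learn = vision_learner(dls, resnet18, metrics=error_rate)  # transfer learning, as in the bird notebook
learn.fine_tune(3)
```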

example_cloud_batch

So, here it is for you to enjoy - FastClouds | Kaggle

I’d love ideas, feedback, and suggestions should anyone have any.

Thanks

44 Likes

Nice notebook and nice writeup … you got an upvote from me!

1 Like

I am not from a computer science background, so this won’t be anything flashy!
I didn’t have much time to explore extra things today, but I tried dog vs muffin.


Question: What would the code look like if I want to specify a muffin photo that looks like a dog to test the machine? (i.e. upload a photo or use a web link to one)

Thanks!

12 Likes

Cool work! To test with a specific image, you could find an image you like on the internet, then just use the download_url method you already used to download it to your Kaggle-hosted directory. You would then feed this image to the learn.predict() method to see how your model does with that image.
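Something like the sketch below should work, assuming learn is the trained model from your notebook; the URL and file names here are just placeholders:

```python
from fastai.vision.all import *
from fastdownload import download_url

# Placeholder URL: point this at any muffin-that-looks-like-a-dog photo you find
url = 'https://example.com/muffin_that_looks_like_a_dog.jpg'
dest = 'test_muffin.jpg'
download_url(url, dest, show_progress=False)

# Feed the downloaded image to your trained learner
img = PILImage.create(dest)
pred, pred_idx, probs = learn.predict(img)
print(f"Prediction: {pred}; probability: {probs[pred_idx]:.4f}")
```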

You can also create an actual “upload” button - chapter 1 of the book shows an example.

6 Likes

Hi everyone,

Though I’m not new to Deep Learning, I haven’t used the fastai library for a while. (And “for a while” means that I still remember the discussions about S4TF…) So I decided that the new iteration of this great course was a good opportunity to try it out again and see how it works after having experience with various other Deep Learning frameworks.

For this purpose, I decided to adapt the first lesson’s example to a slightly different task – instead of classification, I went with image segmentation. I wanted to see whether one can easily learn how to use the library for a different kind of data, so I created a small Kaggle kernel that uses a segmentation dataset and a U-Net model. And it worked! It took me only a couple of hours or so to get the first result. Of course, I also had to slightly preprocess the input data and copy-paste some of my old snippets here and there. But otherwise, it was quite a smooth start.
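For anyone curious what the fastai side of this looks like, here is a minimal sketch using the tiny CamVid demo dataset that ships with the library; my kernel uses a different dataset and its own preprocessing, so treat this only as a starting point:

```python
from fastai.vision.all import *

# Minimal segmentation example on the tiny CamVid dataset bundled with fastai
path = untar_data(URLs.CAMVID_TINY)
codes = np.loadtxt(path/'codes.txt', dtype=str)

dls = SegmentationDataLoaders.from_label_func(
    path, bs=8,
    fnames=get_image_files(path/'images'),
    label_func=lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',   # mask file for each image
    codes=codes,
)

learn = unet_learner(dls, resnet34)    # U-Net with a ResNet-34 encoder
learn.fine_tune(3)
learn.show_results(max_n=4)            # images overlaid with predicted masks
```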

Here is a sample of the data overlaid with segmentation masks:

And here you can see some predictions:

It seems that I have some issues with the data (see the samples with strange white blobs). Also, I only tried a tiny model on a small subset of samples. But everything else looks reasonable. So I definitely recommend checking out the docs to see various examples of how to use the library. You’ll probably be able to adapt it easily to your own use cases (depending on how complex your task is, of course), especially if you have some Python coding experience.

28 Likes

Vishnu! As a Marvel fan, I love the demo.

But I’m concerned about adversarial attacks from DC fans like @muellerzr. We both know that it is the secret hope of every DC fan, Zach included, to see their heroes welcomed into the Marvel fold … some may even go so far as attempting to misuse your classifier to convince others that they indeed have been.

For example …

DC fans would love to convince the unaware that this is an image of Black Panther, and I felt like I needed to add something here to keep this from happening.

So I give you and all other concerned citizens the “Is it a Marvel Character?” application. Aimed to be used as a precursor to your much-needed “Identify your favourite Marvel character”, this model can help folks predict the probability that a character is even of Marvel stock before attempting to classify it as such.

:rocket: Hugging Face Space (Gradio) Demo
Built this following Tanishq’s and Suvash’s excellent work and write-ups.

:rocket: Kaggle notebook
Shows folks how to turn Jeremy’s “Is it a Bird?” notebook from a classification task into a regression task with merely 3 changes.


In addition to saving folks like Zach and other DC fans from themselves, this notebook is meant to provide folks here with an example of building a model where the predicted class could be None or Unsure. Three changes is all it takes, friends.
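If you just want the flavour of it without opening the notebook, here is a rough sketch of the kind of changes involved. These are not necessarily the exact three changes in the notebook, and the folder layout is just a placeholder:

```python
from fastai.vision.all import *

path = Path('images')   # assumed layout: images/marvel/ and images/not_marvel/

def is_marvel(fname):
    # Change 1: label with a float score instead of a category name
    return 1.0 if fname.parent.name == 'marvel' else 0.0

dls = DataBlock(
    # Change 2: swap CategoryBlock for RegressionBlock
    blocks=(ImageBlock, RegressionBlock),
    get_items=get_image_files,
    get_y=is_marvel,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(192),
).dataloaders(path, bs=32)

# Change 3: constrain the output to [0, 1] so it reads as a probability,
# which lets you call anything near 0.5 "Unsure" and anything near 0 "None"
learn = vision_learner(dls, resnet18, y_range=(0, 1))
learn.fine_tune(3)
```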

There is plenty of room for improvement, but it’s a start. As Cap says, “Whatever it takes!”

23 Likes

Wow, I just loved it :star_struck::star_struck::star_struck:.

2 Likes

Today I learned there are still DC fans.

And that one walks amongst us…

20 Likes

ouch and lol!

He’s still young so I’m hopeful despite the propaganda he posts on Twitter.

3 Likes

Liked and Upvoted.

Great kernel and great write-up! Nice to see some of the other ML tasks mentioned in lesson 1 getting nice demos like this from the community!

2 Likes

I tried to make an apple variety image classifier, which does reasonably well (top is actual variety, bottom is predicted).

It turns out that telling apples apart from their pictures is quite hard, but a neural network can do better than I can. It gets about 60% validation error and does reasonably well on the test set above.

Kaggle notebook

19 Likes

I wonder how good an apple expert would be from pics like this…

1 Like

Friends

Following Jeremy’s bird / forest classifier example (and the great work / guides of @ilovescience, @wgpubs and @suvash) I present The Glock 9mm Classifier, hosted on HF Spaces via Gradio!

Glock produces a range of pistols in various calibers, as well as “profiles” for concealment: standard, compact and sub-compact. As Glock has a common design philosophy for all its models, these pistols can be difficult for the untrained eye to distinguish. For this reason, I decided to see if a fine-tuned Resnet152 model would be able to pick up the (quite fine-grained) distinctions between these pistol types.

For example – a Glock 17:

Glock 17

and a Glock 19:

Glock 19

Accuracy was approximately 75% without any real fine-tuning customisation.
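For anyone who wants to try something similar, a setup along these lines should get you close (the folder layout and epoch count below are placeholders, not the exact notebook); ClassificationInterpretation is also handy for seeing which Glock models get confused with each other:

```python
from fastai.vision.all import *

# Assumed layout: glocks/glock17/, glocks/glock19/, glocks/glock26/
path = Path('glocks')
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224), batch_tfms=aug_transforms(),
)

learn = vision_learner(dls, resnet152, metrics=accuracy)   # transfer learning from ImageNet weights
learn.fine_tune(5)

# See which model types get confused with each other, and the worst misclassifications
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.plot_top_losses(6)
```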

For fun (and to make sure I wasn’t pulling anything from the training set!) I also pulled screenshots from YouTube clips of people holding and using the various pistol types, and was very impressed that the model could still accurately pick the type in the vast majority of cases – even when the whole pistol was not visible.

For example, this screenshot from YouTube shows only the front 1/3 of the pistol slide – but the model was able to pick up the model number (19) which is present on all Glock slides.

Another interesting example was where the pistol was being held – and was still accurately classified:

Note: this is only trained on Glock 17, 19 and 26 types (the main 9mm model types). You can, of course, supply an image of any other pistol make. Interestingly, the model will identify some common features and do its best to provide a prediction.

For example, here is a supplied image of a Beretta Mod 92 pistol. What’s interesting here is that the grip of the pistol is substantially obscured by the hand of the person holding it. Since the Glock 26 is a sub-compact model with a comparatively short grip, our model’s highest probability is that the pistol is a Glock 26!

The submitted image:

and a Glock 26:

Glock 26

Enjoy. Thoughts / feedback / comments welcome!

11 Likes

Thanks for sharing this. I’ve been thinking of trying out segmentation tasks, so this is perfect. :raised_hands:

1 Like

This is brilliant. Loved the notebook ! :beers:

Wow, this classifier is definitely already beating me by a huge margin. :apple: :green_apple:

3 Likes

In my case, you’re definitely speaking to the right audience. :sweat_smile:

1 Like

I built an ML model to classify music and identify its genre using fastai, on top of the Kaggle Music Classification Dataset. The accuracy I got after 10 epochs was approximately 55%.

I wrote up my experience of training this model in ML-blog. Do check it out :slight_smile:

20 Likes