Share your work here ✅

For those willing to try their first Kaggle competition, or those with little experience like me, what @radek created on Kaggle and GitHub for this Quick, Draw! competition is pure gold:
He takes you through the initial steps of downloading the right data,
how to process/transform it to fit into fastai v1,
how/where to modify the tuning (image size, full or partial dataset size, number of epochs),
and even includes the code to optimize it (TTA) and then generate a submission (sketched below) :heart_eyes:
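For context, the TTA → submission step he describes looks roughly like this in fastai v1 (a minimal sketch, not radek’s actual code; `learn`, `data`, and the key_id handling are assumptions, and Quick, Draw! scores the top 3 guesses per drawing):

```python
from fastai.vision import *  # fastai v1
import pandas as pd

# Average predictions over augmented versions of each test image (TTA),
# then keep the three most likely classes per drawing.
preds, _ = learn.TTA(ds_type=DatasetType.Test)
top3 = preds.argsort(dim=-1, descending=True)[:, :3]
words = [' '.join(data.classes[int(i)] for i in row) for row in top3]

# Build the submission file (key_id is taken from the test file names here)
sub = pd.DataFrame({'key_id': [Path(f).stem for f in data.test_ds.items],
                    'word': words})
sub.to_csv('submission.csv', index=False)
```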

It’s truly an amazing package for an on-going competition.

Of course, his starter pack “as it is” won’t get you a medal.
But if you dive in and figure out how to play with FastaiV1 options/parameters, you have a unique chance to compete and see your Public Leaderboard score move as you explore (remember: only 5 submissions per day).

Plus, he and others share more tips on fine-tuning for a better score in the Kaggle thread he created, so it’s legit.

BTW, if you check his profile on Kaggle, he went from Novice (tier 1) to Master (tier 4) in the last 3 months, focusing on three Computer Vision competitions (1 solo gold, 2 silver), so he obviously knows a bit about it :rofl:

cc @jeremy

PS: this relates to the first post in this thread.

21 Likes

I classified 149 different composers based on spectrograms of their compositions. I wanted to see whether an image classifier could rival an expert music buff’s ability to spot a composer by listening to a piece. The data was a large MIDI file dump found on a DJ subreddit.

The model turned out only so-so (82% accuracy) and, really, it only works well for the best-represented composers in the dataset (it rocked at classifying Bach). But it was a fun proof of concept and intro to the fastai workflow!


code: https://github.com/zcaceres/deep-learning-composer-prediction

19 Likes

That sounds really high for recognizing that many classes!

2 Likes

Wow, cool! 82% is amazing, almost unbelievable accuracy with so many classes!

You might also want to try a mel spectrogram. It worked better for environmental sounds, though I’m not sure that holds for music.

1 Like

Is this that beautiful spectrogram format provided by librosa?

Yes, one of them:
https://librosa.github.io/librosa/generated/librosa.feature.melspectrogram.html

Librosa is really great, and you can generate several types of spectrograms, all with a lot of options. To be honest, I was pretty much blindly experimenting because I’m not an expert in audio. I think selecting the optimal one with the optimal configuration requires some insight into the specific use case.
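For reference, here’s a minimal sketch of generating one with librosa (the file name and parameters are illustrative, not from the post above):

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Load audio ('piece.wav' is a placeholder) and compute a mel spectrogram
y, sr = librosa.load('piece.wav')
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)  # log (dB) scale reads better visually

# Render it as an image that a CNN classifier could train on
librosa.display.specshow(S_db, sr=sr, x_axis='time', y_axis='mel')
plt.savefig('piece.png', bbox_inches='tight', pad_inches=0)
```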

5 Likes

That’s a useful notebook, I’ll check it out.

I’m trying to build a classification model for various aircraft models (Airbus A380, Boeing 747, Boeing 777, etc.). The problem I’m facing is that the lateral ends of the image get cropped off due to square-cropping, and this hurts training because in most cases the nose and tail of the airplane, which are important features for distinguishing the model, get cropped off. How do I overcome this?

You can add these arguments to the ImageDataBunch function: do_crop=False, padding_mode='zeros'

See https://docs.fast.ai/vision.transform.html for alternative padding_modes and more info on this.
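A minimal sketch of how that might look (the path and transform settings are assumptions, and how these kwargs reach the transform pipeline depends on your fastai v1 version; see the docs above):

```python
from fastai.vision import *  # fastai v1

path = Path('data/aircraft')  # hypothetical dataset folder
data = ImageDataBunch.from_folder(
    path, ds_tfms=get_transforms(), size=224,
    do_crop=False,          # keep the whole frame instead of square-cropping
    padding_mode='zeros',   # pad the short sides with black pixels
)
```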

3 Likes

I took the nice animation of SGD from lesson 2 and tried to take it a step further, plotting the loss function and the path followed by SGD.

Some visualizations at different learning rates:





It was a nice learning experience and an opportunity to learn a little more about matplotlib as well :slight_smile:
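For anyone who wants to try something similar, here’s a minimal self-contained sketch of the idea (my own illustrative code, not the poster’s): fit a line with gradient descent and trace the (a, b) path over the MSE loss surface.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy data: y = 3x + 2 plus noise
np.random.seed(0)
x = np.linspace(-1, 1, 100)
y = 3 * x + 2 + 0.1 * np.random.randn(100)

def loss(a, b):
    return np.mean((a * x + b - y) ** 2)

# Loss surface over a grid of (a, b) values
A, B = np.meshgrid(np.linspace(0, 6, 100), np.linspace(-1, 5, 100))
Z = np.array([[loss(a, b) for a in A[0]] for b in B[:, 0]])

# Gradient descent from (0, 0); lr is the knob that changes the path
a, b, lr, path = 0.0, 0.0, 0.1, []
for _ in range(50):
    path.append((a, b))
    grad_a = np.mean(2 * x * (a * x + b - y))
    grad_b = np.mean(2 * (a * x + b - y))
    a, b = a - lr * grad_a, b - lr * grad_b

pa, pb = zip(*path)
plt.contourf(A, B, Z, levels=30)
plt.plot(pa, pb, 'r.-')  # the path followed by the optimizer
plt.xlabel('a')
plt.ylabel('b')
plt.title(f'lr={lr}')
plt.show()
```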

144 Likes

If you read https://www.fast.ai/2018/08/10/fastai-diu-imagenet/ there is a discussion on rectangles. I did not have time to check whether this has already been implemented in fastai v1.

very cool :+1:

Link is broken.

Link is OK for me.

Great job!

1 Like

I tried classifying all 81 fruits in the same dataset. Almost 100% accuracy on the validation set, even without training the lower layers. Maybe the fruit shapes are very basic, so such a deep NN is not required.

1 Like

Interesting observation. I think it’s worth a shot to test it and see how the model does on grayscale images.

1 Like

Hi Everyone!

A little more on my satellite project. (a lot actually :slight_smile: )

First, you can try it in a nice-ish web app: yourcityfrom.space

I’ve played around with a couple of production and serving pipelines and ended up serving everything the old-fashioned way from a DigitalOcean droplet. The code and a little more about the backend and frontend are in the GitHub repo.

More interesting is the fact that when confronted with examples outside the initial dataset (which was split into training/validation), the perceived accuracy is much lower than the 85% accuracy on the validation set.

I think that’s very representative of trying to use DL in the real world, where you collect data, split train/val, and then get so-so results when the model is in production.

Anyways, I spent a bit of time trying to figure out what was going on. It comes down to the data collection methodology. To make a long story short, the 4,000 images in the original dataset were sometimes “too similar”: patches that were next to each other geographically ended up split between the training and validation sets, which boosted validation accuracy significantly but didn’t generalize to images collected using a different method.
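One standard way to avoid that kind of leakage (a sketch under assumptions, not this project’s actual code) is to split by geographic group, so neighboring patches never straddle the train/validation boundary:

```python
from sklearn.model_selection import GroupShuffleSplit

# tile_ids is a hypothetical per-image ID of the geographic tile each
# patch was cut from; patches sharing a tile stay on the same side.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, valid_idx = next(splitter.split(image_paths, labels, groups=tile_ids))
```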

I thought that was a super interesting lesson and am looking forward to seeing if I can fix this by improving my data collection methodology! Will keep you posted.

35 Likes

Hello,
I have created a very basic web app (with as little HTML as possible; I hope to improve the UI) that classifies Alligator vs Crocodile (apparently they are more similar than I thought) with a total of 400 images. Please give it a try. I am also planning to train a model on more images. :slight_smile:

Thanks for trying.
EDIT: I have removed the link and am moving my app to a more stable solution like Azure Web App or Heroku! Also, I am working on v2 (training on more data). Thanks everyone for your support.
EDIT2: Please find the updated app post here.

3 Likes

Does it sting? (Inspired by Jeremy’s teddy bears)

Hoverflies are well known for their mimicry. Hoverflies protect themselves by looking a lot like wasps and bees, but… they do not sting!

It would be very helpful to know whether a wasp-like insect will sting or not :-)… so I had a go and downloaded a few hundred pictures of wasps, bees and hoverflies from Google.

Using the “four lines of code” approach of lesson 2, the model reached over 95% accuracy.
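For anyone new to the thread, that recipe looks roughly like this in fastai v1 (a sketch with assumed paths and settings, not Ad’s exact notebook):

```python
from fastai.vision import *  # fastai v1

path = Path('data/stinging')  # hypothetical folder with wasp/bee/hoverfly subfolders
data = ImageDataBunch.from_folder(path, valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224)
learn = create_cnn(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
```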

[image]

It even recognised this as a bee:

and this as a hoverfly:

[image]

(Both were not part of the training or validation set.)

DL is fun!

Ad

5 Likes

this is sick! thanks for sharing :slight_smile:

2 Likes