Share your work here ✅

Is this that beautiful spectrogram format provided by librosa?

Yes, one of them:
https://librosa.github.io/librosa/generated/librosa.feature.melspectrogram.html

Librosa is really great, and you can generate several types of spectrograms all with a lot of options. To be honest I was just pretty much blindly experimenting because I’m not an expert in audio. I think selecting the optimal one with the optimal configuration requires some insight into the specific use case.
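
For anyone who wants a starting point, this is roughly the kind of call I was playing with (a minimal sketch; the file path and parameter values are just placeholders, not a recommendation):

```python
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Load an audio clip (placeholder path)
y, sr = librosa.load('clip.wav')

# Mel spectrogram with a fairly common configuration
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, fmax=8000)

# Convert power to dB so the plot is readable
S_dB = librosa.power_to_db(S, ref=np.max)

librosa.display.specshow(S_dB, sr=sr, x_axis='time', y_axis='mel', fmax=8000)
plt.colorbar(format='%+2.0f dB')
plt.title('Mel spectrogram')
plt.tight_layout()
plt.show()
```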

5 Likes

That’s a useful notebook, I’ll check it out.

I’m trying to build a classification model for various aircraft models (Airbus A380, Boeing 747, Boeing 777, etc.). The problem I’m facing is that the lateral ends of the image get cropped off due to square-cropping, and this hurts my training because in most cases the nose and tail of the airplane, which are important features for distinguishing the model, get cropped off. How do I overcome this?

You can add these arguments to the ImageDataBunch function: do_crop=False, padding_mode='zeros'

See https://docs.fast.ai/vision.transform.html for alternative padding_modes and more info on this.
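
For example, something like this (a sketch assuming the fastai v1 API from this course; the dataset path is hypothetical and argument names may differ between releases):

```python
from fastai.vision import *

path = Path('data/aircraft')  # hypothetical folder of labelled aircraft images

data = ImageDataBunch.from_folder(
    path,
    ds_tfms=get_transforms(),
    size=224,
    do_crop=False,          # skip the square centre-crop
    padding_mode='zeros',   # pad the short sides with black instead
)
```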

3 Likes

I took the nice animation of SGD from lesson 2 and tried to take it a step further, plotting the loss function and the path followed by SGD.

Some visualizations at different learning rates:





It was a nice learning experience and an opportunity to learn a little bit more about matplotlib as well :slight_smile:
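
In case anyone wants to try something similar, here is a minimal sketch of the idea on a toy linear-regression problem (all names and values are mine, not from the lesson notebook):

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy problem: fit y = a*x + b, recording the path gradient descent takes
np.random.seed(42)
x = np.random.uniform(-1, 1, 100)
y = 3 * x + 2 + np.random.normal(0, 0.1, 100)

def loss(a, b):
    return ((a * x + b - y) ** 2).mean()

a, b, lr = 0.0, 0.0, 0.1
path = [(a, b)]
for _ in range(50):
    pred = a * x + b
    a -= lr * 2 * ((pred - y) * x).mean()   # d(loss)/da
    b -= lr * 2 * (pred - y).mean()         # d(loss)/db
    path.append((a, b))

# Contour plot of the loss surface with the optimisation path on top
A, B = np.meshgrid(np.linspace(-1, 5, 100), np.linspace(-1, 5, 100))
Z = np.vectorize(loss)(A, B)
pa, pb = zip(*path)
plt.contour(A, B, Z, levels=30)
plt.plot(pa, pb, 'r.-', label='GD path')
plt.xlabel('a')
plt.ylabel('b')
plt.legend()
plt.show()
```

Re-running with different lr values reproduces the overshooting vs. slow-crawl behaviour the animations show.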

144 Likes

If you read https://www.fast.ai/2018/08/10/fastai-diu-imagenet/ there is a discussion of training on rectangular images. I did not have time to check whether this has already been implemented in fastai v1.

very cool :+1:

Link is broken.

Link is OK for me.

Great job!

1 Like

I tried classifying all 81 fruits in the same dataset. Almost 100% accuracy on the validation set, even without training the lower layers. Maybe the fruit shapes are very basic, so such a deep NN is not required.

1 Like

Interesting observation. I think it is worth a shot to test it and see how the model does with grayscale images.
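
One cheap way to run that experiment (a sketch; the folder paths are hypothetical) is to pre-convert the images with PIL, going back to three channels so a pretrained three-channel model still accepts them:

```python
from pathlib import Path
from PIL import Image

# Hypothetical paths: convert a folder of images to 3-channel grayscale
src, dst = Path('fruits/train'), Path('fruits_gray/train')
for img_path in src.rglob('*.jpg'):
    out = dst / img_path.relative_to(src)
    out.parent.mkdir(parents=True, exist_ok=True)
    # 'L' collapses to one channel; converting back to 'RGB' keeps the
    # input shape compatible with pretrained three-channel models
    Image.open(img_path).convert('L').convert('RGB').save(out)
```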

1 Like

Hi Everyone!

A little more on my satellite project. (a lot actually :slight_smile: )

First - you can try it in a nice-ish webapp: yourcityfrom.space

I’ve played around with a couple of production and serving pipelines and ended up serving everything the old-fashioned way from a DigitalOcean droplet. The code and a little more about the backend and frontend are in the GitHub repo.

More interesting is the fact that when confronted with examples outside the initial dataset (which was split into train/test), the perceived accuracy is much lower than the 85% accuracy on the validation set.

I think that’s very representative of trying to use DL in the real world, where you collect data, split train/val, and then get so-so results once the model is in production.

Anyways, I spent a bit of time trying to figure out what was going on. It comes down to the data collection methodology. To make a long story short, the 4000 images in the original dataset were sometimes “too similar”: patches that were next to each other geographically ended up being split between the train and validation sets, which boosted validation accuracy significantly but didn’t generalize to images collected using a different method.

I thought that was a super interesting lesson and am looking forward to seeing if I can fix this by improving my data collection methodology! Will keep you posted.
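
For anyone hitting the same leakage, the fix I have in mind is a group-aware split: assign whole geographic tiles to either train or validation, so neighbouring patches can never end up on both sides. A sketch, with a made-up filename scheme:

```python
import random
from collections import defaultdict

# Assumes hypothetical filenames like 'paris_tile012_patch3.jpg',
# where everything before '_patch' identifies the geographic tile.
def tile_id(fname):
    return fname.rsplit('_patch', 1)[0]

def split_by_tile(filenames, valid_frac=0.2, seed=42):
    tiles = defaultdict(list)
    for f in filenames:
        tiles[tile_id(f)].append(f)
    tile_keys = sorted(tiles)
    random.Random(seed).shuffle(tile_keys)
    # Hold out whole tiles, not individual patches
    n_valid = int(len(tile_keys) * valid_frac)
    valid = [f for t in tile_keys[:n_valid] for f in tiles[t]]
    train = [f for t in tile_keys[n_valid:] for f in tiles[t]]
    return train, valid
```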

35 Likes

Hello,
I have created a very basic web app (with as little HTML as possible; I hope to improve the UI) that classifies alligators vs. crocodiles (apparently they are more similar than I thought) with a total of 400 images. Please give it a try. I am also planning to train a model on more images. :slight_smile:

Thanks for trying.
EDIT: I have removed the link and am moving my app to a more stable solution like Azure Web App or Heroku! I am also working on v2 (training on more data). Thanks everyone for your support.
EDIT2: Please find the updated app post here.

3 Likes

Does it sting? (Inspired by Jeremy’s teddy bears)

Hoverflies are well known for their mimicry. They protect themselves by looking a lot like wasps and bees, but… they do not sting!

It would be very helpful to know whether a wasp-like insect will sting or not :-) …so I had a go and downloaded a few hundred pictures of wasps, bees and hoverflies from Google.

Using the “four lines of code” approach of lesson 2, the model reached over 95% accuracy.
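
(For reference, the “four lines” were essentially the lesson 2 recipe; this is a sketch from memory using the fastai v1 API of this course, with my own paths:)

```python
from fastai.vision import *

path = Path('data/stinging')  # subfolders: wasps/, bees/, hoverflies/
data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224)
learn = create_cnn(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
```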


It even recognised this as a bee:

and this as a hoverfly:


(Neither was part of the training or validation set.)

DL is fun!

Ad

5 Likes

this is sick! thanks for sharing :slight_smile:

2 Likes

This is wonderful! :slight_smile:

2 Likes

Great discovery. Here’s some useful info on that topic - hope it helps:

https://www.fast.ai/2017/11/13/validation-sets/

3 Likes

Just wanted to share my small contribution to the library. When I tried to classify a larger dataset (80k images, 340 classes), I got a memory-related error caused by an underlying PyTorch function call. I’ve introduced a new parameter that makes it possible to overcome this obstacle: https://docs.fast.ai/vision.learner.html#Working-with-large-dataset
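
If I recall the change correctly, the interpretation methods gained a slice_size parameter so the heavy tensor work is done in chunks. Roughly (a sketch; please check the linked docs for the exact signatures):

```python
from fastai.vision import *

# Assumes `learn` is a Learner already trained on the large dataset
interp = ClassificationInterpretation.from_learner(learn)

# Computing in slices avoids materialising one huge intermediate tensor
interp.plot_confusion_matrix(slice_size=10)
interp.most_confused(min_val=2, slice_size=10)
```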

8 Likes

Hi everyone,
Instead of doing the Lesson 2 homework (which was trying web deployment of a model), a few of us from the fast.ai Asia Virtual study group are trying to build a mobile app (with everything running on the phone): an “Another Not Hotdog” app, but using PyTorch.

I’ve outlined the complete plan in a blog post here.

@cedric has already created the framework needed for the majority of steps 2 and 3 mentioned in the blog post.

You can find the fast.ai camera repository here.
Here’s a little video demo:

I’ll be thankful for any feedback.

Best Regards,
Sanyam

32 Likes