Share your work here ✅

(hector) #65

@MagnIeeT and I are working on an audio dataset from a Kaggle competition; we converted the clips into images using the Fourier transform (FFT).
We performed several first-cut experiments taking the top 3 and top 7 most frequent classes.
We are getting ~84% accuracy on 3 classes.
[image]
A few images of the top losses (each graph is the FFT of an audio clip):

[image]

The performance degrades with an increasing number of classes.
[image]
Next steps will be to change how we do the Fourier transform on the audio (in particular the sampling frequency of the audio file; the window size we selected was 2 seconds and needs adjusting to the note frequencies). We also need to test this approach on bigger datasets, as our data is currently very small. We are planning to try spectrograms as well, as other users have.
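For anyone curious what the FFT step looks like, here is a minimal sketch (not the authors' actual code) of taking a fixed 2-second window of a signal and computing its one-sided magnitude spectrum with NumPy, demonstrated on a synthetic 440 Hz tone:

```python
import numpy as np

def fft_window(signal, sample_rate, window_sec=2.0, offset_sec=0.0):
    """Magnitude spectrum of one fixed-length window of audio.

    `signal` is a 1-D array of samples; the 2-second default mirrors the
    window size mentioned above. Returns (frequencies, magnitudes).
    """
    start = int(offset_sec * sample_rate)
    n = int(window_sec * sample_rate)
    window = signal[start:start + n]
    magnitudes = np.abs(np.fft.rfft(window))  # one-sided spectrum
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    return freqs, magnitudes

# Demo on a synthetic 440 Hz tone: the spectral peak lands on the 440 Hz bin.
sr = 22050
t = np.arange(int(sr * 2.0)) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
freqs, mags = fft_window(tone, sr)
print(freqs[np.argmax(mags)])  # → 440.0
```

Plotting `mags` (usually on a log scale) for successive windows is what turns each clip into the kind of image shown above; the window length directly sets the frequency resolution (here 0.5 Hz), which is why it needs tuning to the note frequencies.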

The Google audio dataset is another good source, providing 10-second audio snippets. We initially planned to use it but parked it for later, as it is more suited to multi-label classification.

12 Likes

Share your work here - highlights
(Suvash Thapaliya) #66

Neat. I would love to be able to go through the notebook.

1 Like

(Ramesh Kumar Singh) #67

@raghavab1992 It’s really interesting work you are doing. Could you please write a blog post about it? And if we get to see the notebook, it will be a really great resource to learn from.

1 Like

(Ethan Sutin) #68

Cool!

Did you use the 10-fold CV as described in the dataset notes? If not, your accuracy is misleading. Take a look at the dataset website. I had similar results, but it’s much more difficult doing the CV, and if you don’t, you can’t compare your accuracy to the numbers in the literature.
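The protocol Ethan describes (evaluating on each predefined fold in turn and averaging, rather than one random split) can be sketched like this; `train_and_score` is a hypothetical stand-in for whatever training loop you use:

```python
import numpy as np

def cross_validate(items, folds, train_and_score):
    """Leave-one-fold-out evaluation over predefined folds.

    `items` is an array of samples, `folds` an equal-length array of fold
    ids (1..10 for a 10-fold setup), and `train_and_score(train, test)` is
    any callable returning an accuracy. The reportable number is the mean
    accuracy across the held-out folds.
    """
    scores = []
    for k in np.unique(folds):
        train, test = items[folds != k], items[folds == k]
        scores.append(train_and_score(train, test))
    return float(np.mean(scores))

# Toy demo: 100 items split into the 10 predefined folds, with a dummy
# "model" that always scores 0.8 on the held-out fold.
rng = np.random.default_rng(0)
items = rng.normal(size=100)
folds = np.repeat(np.arange(1, 11), 10)
acc = cross_validate(items, folds, lambda tr, te: 0.8)
print(acc)  # → 0.8
```

Using the dataset's own fold assignments matters because clips from the same original recording are kept in the same fold; a random split leaks near-duplicates into validation and inflates accuracy.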

Would love to see your nb :slight_smile:

0 Likes

(hector) #69

@rameshsingh Thanks :slightly_smiling_face: I will summarize the learnings from the different experiments we are doing in a blog post once they are complete. I will share the notebook after some cleanup, as it is too messy to share in the forum right now.

1 Like

(Jan) #70

With 39 zucchinis and 47 cucumbers …

a ResNet50 with input size 299 managed to perfectly distinguish between the two on the validation set (10% of the above numbers) after 2 epochs:

I know we must be careful when interpreting results on a validation set with 13 samples, but I had a limited number of images and wanted to share the results in any case. The fact that training loss >> validation loss is probably an indication that my validation set is not difficult enough.

6 Likes

(Andrew Chaffin) #71

I pulled data from Kaggle - https://www.kaggle.com/slothkong/10-monkey-species

10 species of monkeys, with about 100 training images for each. I didn’t see any need to do the fine-tuning section with this data, because how do you get better than pretty much perfect right from the start? Amazing. Going to find some other data and go again.

0 Likes

(Ad Postma) #72

Thanks for your reply Ethan. I will look into that.

1 Like

(Abhi Gupta) #73

Hello everyone, I found this dataset on Kaggle using Google Dataset Search; it is for classifying fruit photos. I tried my hand at it and got to a 0.5% error rate within 4 epochs, using resnet34 as the architecture.
Here are the images

The only things my model is classifying wrong are the same items with different labels. :smile:


1 Like

(Vitaliy Bondarenko) #74

Hi Radek, how do you approach the memory problem with fastai for this competition? The fastai learner loads the whole dataset into in-memory arrays, and this dataset is too large for that.

These are some experiments I did with the previous fastai version: Experiments on using Redis as DataSet for fast.ai for huge datasets

2 Likes

#75

Hi Vitaliy - for this competition I load the data directly from HDD.

0 Likes

(Sparkle Russell-Puleri) #76

I wanted to represent for the Caribbean programmers, so I built a classifier to distinguish Trinidad & Tobago Masqueraders from regular islanders.

Here is a sample of my dataset

Here is a sample of my predictions

Here is my confusion matrix

Pretty decent results for a very small dataset. Notebook will be forthcoming.

3 Likes

(Ilia) #77

I was under the impression that the data loader accepts paths and loads tensors on demand, no? Otherwise it would be impossible to deal with any modern dataset, even a relatively small one. I remember getting out-of-memory errors even when training a dog breeds classifier.

As far as I know, the PyTorch datasets API doesn’t force you to load everything into memory at once. You only need to define how to retrieve a single instance based on its index.

0 Likes

(Vitaliy Bondarenko) #78

How? If you take all 50M images, you will likely run out of memory on a p2/p3.xlarge instance.

0 Likes

(Vitaliy Bondarenko) #79

I didn’t look into the latest fastai, but the fastai from last year loaded all the data into an ArraysDataset in memory in the learner.precomputed call.

0 Likes

#80

I’m not sure where, maybe it was with precompute=True, but in vision, fastai only loads the images a batch at a time when needed for training/validation.

2 Likes

(Neeraj Agrawal) #81

I think the precompute option has been removed in this version of fastai.

0 Likes

(James Requa) #82

You can also use an S3 bucket to store the data on AWS, and use a library like boto3 to access the data from the bucket.
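A small sketch of that approach; the bucket and key names are hypothetical, and boto3 is imported lazily inside the download helper so the pure parts run without AWS credentials:

```python
def s3_uri(bucket, key):
    """Pure helper: format an object location (no AWS access needed)."""
    return f"s3://{bucket}/{key}"

def fetch_from_s3(bucket, key, local_path):
    """Download one object to local disk with boto3 (assumed installed and
    configured with credentials in the usual way)."""
    import boto3  # lazy import: only needed when actually downloading
    s3 = boto3.client("s3")
    s3.download_file(bucket, key, local_path)  # streams to disk, not RAM

uri = s3_uri("my-dataset-bucket", "train/img_001.jpg")
print(uri)  # → s3://my-dataset-bucket/train/img_001.jpg
```

Since `download_file` writes straight to disk, you can pull down only the shard you need for the current training run rather than keeping everything on the instance.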

1 Like

(Michal Wawrzyniuk) #83

Hi,

A small and simple spin-off from lesson 1.
One of my last hackathon tasks was
to do recognition of road signs.
As you can see, without much hassle I achieved 98% on a very unique black-and-white dataset with three classes:
AR - arrow
LD - left diagonal
RD - right diagonal
I loaded the data from a CSV.

Cheers

Michal

0 Likes

(Fadhli Ismail) #84

This is quite a nice project. I love the simplicity of it. Nice work.

2 Likes