Share your work here ✅

I’ve posted some more articles on Medium:

Sound classification using spectrograms

Momentum, Adam’s optimizer and more

Dropout and an overview of other regularization techniques

I’ve learnt a lot in this course and I am really excited for the last lecture and the kinds of things you can do with U-Net. I would suggest that anyone doing this course stick with it. It is easy to keep switching courses, but fast.ai is the real deal.

1 Like

That looks awesome! I couldn’t find the notebook you used to train your model, but when things aren’t as you expect, it’s usually good to run plot_top_losses and plot_confusion_matrix. You might find that some pictures are mislabeled, that the learner is looking in the wrong place, or that two particular types of penguins are similar. You seem to have cleaned your data well, though. You could try adjusting your transforms: larger images, no warping, and lower contrast. You’ll just have to try things and see how well they work.
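For example (fastai v1, assuming your trained learner is called learn):

from fastai.vision import *

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9, figsize=(10, 10))     # the images the model got most wrong
interp.plot_confusion_matrix(figsize=(8, 8))    # which species get mixed up
interp.most_confused(min_val=2)                 # worst (actual, predicted, count) triples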

1 Like

Thanks! Yes, I should probably put my notebook on my GitHub page; I haven’t done that yet.

I have been using plot_confusion_matrix, and I know that some of the penguin species do look alike, but it is also confusing species that have quite different features. I am currently running some experiments with changing the transforms to see what helps.

Thanks, I will definitely try mixed precision training.
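From the docs, it looks as simple as chaining to_fp16() onto the learner (a sketch; data stands in for my penguin DataBunch):

from fastai.vision import *

learn = cnn_learner(data, models.resnet50, metrics=accuracy).to_fp16()
learn.fit_one_cycle(4)   # fp16 activations with fp32 master weights under the hood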

1 Like

Hello!

I made an environmental sound classifier called WHISP that classifies sounds from 50 categories, trained on the ESC-50 dataset.

You can try it out here! https://whisp.onrender.com/

The code is up here on Github: https://github.com/aquietlife/whisp

I also wrote at length about WHISP, the ESC-50 dataset, training an environmental sound classifier, and some insights I had along the way while testing it in the field on my blog: http://aquiet.life/

Please let me know what you think! I’d love to connect with other people using ML/AI in the sound/audio field :slightly_smiling_face:

4 Likes

Hi @dipam7,
I’ve taken a look at your post: it’s not a good idea to use the default image data augmentation on spectrograms (especially big rotations and vertical translations).

If you want some suggestions on sound data augmentation, see the Deep Learning with Audio Thread.
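As a starting point, here is a sketch of turning off the spectrogram-unfriendly defaults in fastai v1’s get_transforms (the exact values are just a suggestion):

from fastai.vision import get_transforms, ImageDataBunch

tfms = get_transforms(do_flip=False,     # flipping reverses the time/frequency axes
                      max_rotate=0.,     # rotation has no physical meaning here
                      max_zoom=1.,       # zoom distorts the time/frequency scales
                      max_warp=0.,       # likewise for perspective warp
                      max_lighting=0.1)  # mild brightness/contrast jitter is usually safe
# e.g. data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=224)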

5 Likes

Hey, thank you for the feedback :slight_smile: I’ll change that; I knew it, but maybe I did not incorporate it while coding. I was actually having trouble generating spectrograms on Kaggle: if I generated more than 500 output files, I couldn’t commit my kernel, so I’m switching to Colab now. Yes, I’ve referred to that thread as well. Thanks again.
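For anyone hitting the same thing, this is roughly what the spectrogram generation step can look like (librosa + matplotlib; the paths are just placeholders):

import numpy as np
import librosa, librosa.display
import matplotlib.pyplot as plt

def save_spectrogram(wav_path, out_path):
    y, sr = librosa.load(wav_path)
    S = librosa.feature.melspectrogram(y=y, sr=sr)   # mel power spectrogram
    S_db = librosa.power_to_db(S, ref=np.max)        # convert to decibels
    fig = plt.figure(figsize=(3, 3), frameon=False)
    librosa.display.specshow(S_db, sr=sr)
    plt.axis('off')
    fig.savefig(out_path, bbox_inches='tight', pad_inches=0)
    plt.close(fig)

save_spectrogram('audio/sample.wav', 'spectrograms/sample.png')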

I made a classifier that recognizes images among 3 sports: rugby, wrestling and American Football.

To make the challenge interesting, I purposely took images of the tackling motion in each of the three sports. I learned a couple of things: a) CNNs are awesome; I got a 90% prediction success rate on the first try. b) Errors were due to bad data: when searching for rugby, some images of American football were also picked up by Google Images, so the algorithm correctly identified them as football, but since they were labeled rugby to begin with, they were counted as incorrect :rofl:
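One thing I might try for weeding out those mislabeled Google Images results is fastai’s cleaning widget (a sketch, assuming a Jupyter notebook and my trained learn):

from fastai.widgets import DatasetFormatter, ImageCleaner

ds, idxs = DatasetFormatter().from_toplosses(learn)  # order images by loss
ImageCleaner(ds, idxs, path)                         # widget to relabel or delete them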

I wrote a Medium blog post about it… looking to explore another use case with different sets of images.

https://medium.com/@parth.bme/deep-learning-with-fast-ai-engine-makes-it-speed-learning-52ab8564d184

1 Like

I got 74th place, top 2%, using fast.ai tabular on the LANL quake Kaggle challenge: https://www.kaggle.com/isaacshannon/isaac-fast-ai-evo

11 Likes

I created a trash/recycling classifier and got the validation accuracy up to 94%. It’s not as accurate on random uploaded data, but it’s still pretty good. Next steps are to get more labelled data and to experiment with augmenting the existing pictures with background noise.

https://fasttrash.onrender.com/

2 Likes

For week 4 I wanted to have a go at some NLP. I have a fairly long WhatsApp chat history with my wife, so I decided to try to build a language model with it…

My notebook is here. It can be used to process and build a model from any WhatsApp chat history. I took a very simple approach: I just strung all the messages together for each day (~1,000 total) and didn’t label by person. Despite the small amount of data (it runs very quickly), the text generated by the model was pretty good!
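For anyone who wants to try the same thing, the core of it looks roughly like this (fastai v1; the export filename and the date regex are assumptions, since WhatsApp export formats vary):

import re
from collections import defaultdict
import pandas as pd
from fastai.text import TextLMDataBunch, language_model_learner, AWD_LSTM

# group messages by day, then string each day's messages together
days = defaultdict(list)
with open('chat.txt', encoding='utf-8') as f:
    for line in f:
        m = re.match(r'(\d{1,2}/\d{1,2}/\d{2,4}), \d{1,2}:\d{2} - [^:]+: (.*)', line)
        if m:
            days[m.group(1)].append(m.group(2))

df = pd.DataFrame({'text': [' '.join(msgs) for msgs in days.values()]})
train_df = df.sample(frac=0.9, random_state=42)
valid_df = df.drop(train_df.index)

data_lm = TextLMDataBunch.from_df('.', train_df, valid_df, text_cols='text')
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)
learn.fit_one_cycle(4, 1e-2)
print(learn.predict('Good morning', n_words=30))  # generate some chat-flavored text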

3 Likes

I recently got a bronze medal in Google’s Landmark Recognition Kaggle Competition. Read all about it here!

7 Likes

Thank you for sharing the details, and congrats on your bronze medal. For multi-GPU, were you using the distributed model (to_distributed)? Can you please share some details on that? Thanks.

Thanks, I will post the code tomorrow (doing a little training before I upload it).

Literally as easy as:

learn.model = torch.nn.DataParallel(learn.model)
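In context, it would look something like this (a sketch; data stands in for your DataBunch):

import torch
from fastai.vision import *

learn = cnn_learner(data, models.resnet50, metrics=accuracy)
if torch.cuda.device_count() > 1:
    learn.model = torch.nn.DataParallel(learn.model)  # replicate across visible GPUs
learn.fit_one_cycle(5)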

What an article! Very nicely written! Can’t wait to see the code now! And congrats on your medal! :slight_smile:

Hey guys, I recently started writing for a new publication. My first article for them is about CNNs and Heatmaps. Fastai has really been a launchpad for my deep learning journey. Waiting for part 2

Best,
Dipam

After lessons 1 & 2, I’m starting to play with an image classifier for ants.
Starting small, since time is limited, by focusing on Lasius niger, Messor barbarus and Pheidole pallidula.
I achieved 85% accuracy with a ResNet-50 without much fine-tuning for now.
Not extraordinary but very interesting :slight_smile:
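For reference, the usual fastai v1 fine-tuning recipe I still have to try (a sketch; data stands in for the ant DataBunch):

from fastai.vision import *

learn = cnn_learner(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(4)                            # train the new head first
learn.unfreeze()                                  # then unfreeze the whole network
learn.lr_find()                                   # pick a learning rate from the plot
learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-3))  # discriminative learning rates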

1 Like

I just finished the second post in my mini-series on automatic captcha solving. In the first part, I used a multi-label classification approach to show that CNNs can be used to solve captchas.

In this new part, I’m using single-character classification to turn captcha solving into a standard classification task. The heatmap function in plot_top_losses produced some very nice visuals. In fact, it was so useful that I’m considering moving the heatmap generation into its own function, so we could use it to explain the model’s decisions for arbitrary inputs; a rough sketch of that idea is below.
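Something like this (fastai v1 hooks; the function name and details are my own, mirroring how plot_top_losses computes its Grad-CAM):

import torch
from fastai.callbacks.hooks import hook_output

def gradcam_heatmap(learn, x, target_class):
    "Grad-CAM activation map for a one-image batch `x` of shape (1, C, H, W)."
    m = learn.model.eval()
    with hook_output(m[0]) as hook_a, hook_output(m[0], grad=True) as hook_g:
        preds = m(x)
        preds[0, target_class].backward()
    acts = hook_a.stored[0].cpu()        # feature-map activations from the CNN body
    grads = hook_g.stored[0][0].cpu()    # gradients of the target score w.r.t. them
    grad_chan = grads.mean(1).mean(1)    # channel-wise mean gradient (the CAM weights)
    return torch.relu((acts * grad_chan[..., None, None]).sum(0))  # (H', W') heatmap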

1 Like

I’m getting the same error, but even setting num_workers and padding_mode is not fixing it.

Sorry, this is all I thought of when I read your first sentence… =P


1 Like