I’ve learnt a lot in this course and I’m really excited for the last lecture and the kinds of things you can do with U-Net. I’d suggest that anyone doing this course stick with it. It’s easy to keep switching courses, but fast.ai is the real deal.
That looks awesome! I couldn’t find the notebook you used to train your model, but it’s usually good to run plot_top_losses and plot_confusion_matrix when things aren’t as you expect. You might find that some pictures are mislabeled, that the learner is looking in the wrong place, or that two particular types of penguins are similar. You seem to have cleaned your data well, though. You could try adjusting your transforms: larger images, no warping, lower contrast. You’ll just have to try things and see what works.
Thanks! Yes, I should probably put my notebook on my GitHub page; I haven’t done that yet.
I have been using plot_confusion_matrix, and I know that some of the penguin species do look alike, but it’s also confusing species that have quite different features. I’m currently experimenting with the transforms to see what helps.
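For anyone following along who hasn’t used it yet: a confusion matrix is just a table of (actual, predicted) counts, with the off-diagonal cells showing which classes get mixed up. Here’s a minimal stdlib sketch of the computation that plot_confusion_matrix visualizes; the penguin class names are made up for illustration.

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, classes):
    """Count (actual, predicted) pairs -- the table that
    fastai's plot_confusion_matrix renders as a heatmap."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(actual, pred)] for pred in classes] for actual in classes]

# Hypothetical penguin labels, not from the poster's notebook.
classes = ["adelie", "gentoo", "chinstrap"]
y_true = ["adelie", "adelie", "gentoo", "chinstrap", "gentoo"]
y_pred = ["adelie", "gentoo", "gentoo", "chinstrap", "gentoo"]

for row in confusion_matrix(y_true, y_pred, classes):
    print(row)
# Row i, column j = images of class i predicted as class j;
# the single adelie->gentoo count shows up off the diagonal.
```

Large off-diagonal counts between species with very different features usually point at label noise or transforms rather than genuine visual similarity.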
I also wrote at length on my blog about WHISP, the ESC-50 dataset, training an environmental sound classifier, and some insights I had along the way while testing it in the field: http://aquiet.life/
Please let me know what you think! I’d love to connect with other people using ML/AI in the sound/audio field.
Hi @dipam7,
I’ve taken a look at your post: it’s not a good idea to use the default image data augmentations on spectrograms (especially big rotations and vertical translations):
Hey, thank you for the feedback; I’ll change that. I knew it, but maybe I didn’t incorporate it while coding. I was actually having trouble generating spectrograms on Kaggle: if I generated more than 500 output files I couldn’t commit my kernel, so I’m switching to Colab now. Yes, I’ve referred to that thread as well. Thanks again.
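To make the point above concrete, here’s a toy illustration (made-up values, not a real spectrogram) of why vertical flips and rotations are harmful on spectrograms: the vertical axis is frequency, so flipping it moves energy to physically different frequency bins, while time-axis changes leave each bin’s content intact.

```python
# Toy "spectrogram": rows = frequency bins (low at the bottom of the
# list here), columns = time frames. Values are arbitrary magnitudes.
spec = [
    [0, 0, 0, 0],   # high-frequency bin: silent
    [1, 1, 1, 1],   # mid bin: faint
    [9, 9, 9, 9],   # low-frequency bin: strong hum
]

# A vertical flip (a default photo augmentation) moves the hum from
# the low-frequency bin to the high-frequency bin -- a different sound.
flipped = spec[::-1]

# Reversing the time axis only plays the clip backwards; each frequency
# bin keeps its energy, so time-axis transforms are the safer ones.
time_reversed = [row[::-1] for row in spec]

print(flipped[0])        # the hum is now in the high-frequency bin
print(time_reversed[2])  # the hum is still in the low-frequency bin
```

The same reasoning rules out large rotations: they smear energy across frequency bins in a way no real recording would.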
I made a classifier that recognizes images from three sports: rugby, wrestling, and American football.
To make the challenge interesting, I purposely chose images of the tackling motion in each of the three sports. I learned a couple of things: a) CNNs are awesome; I got a 90% prediction success rate on the first try. b) The errors were due to bad data: when searching for rugby, some images of American football were also picked up by Google Images, so the model correctly identified them as football, but since they were labeled rugby to begin with, they were counted as incorrect.
I wrote a Medium blog post about it… looking to explore another use case with a different set of images.
I created a trash/recycling classifier and got the validation accuracy up to 94%. It’s not that good on randomly uploaded data, but it’s pretty good. Next steps are to get more labelled data and to experiment with augmenting the existing pics with background noise.
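One cheap version of the noise augmentation idea, sketched here with a hypothetical helper on a toy grayscale image (in practice you’d apply this per channel, or use a library transform): add small Gaussian perturbations to each pixel and clamp back to the valid range.

```python
import random

def add_noise(pixels, sigma=10.0, seed=0):
    """Return a noisy copy of a grayscale image, given as a list of
    rows of 0-255 ints. sigma controls the noise strength."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [
        [min(255, max(0, round(p + rng.gauss(0, sigma)))) for p in row]
        for row in pixels
    ]

image = [[100, 150], [200, 50]]   # toy 2x2 grayscale "photo"
noisy = add_noise(image, sigma=10.0)
```

Generating several noisy copies per labelled image is a quick way to stretch a small dataset before collecting more real data.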
For week 4 I wanted to have a go at some NLP. I have a fairly long WhatsApp chat history with my wife, so I decided to try to build a language model with it…
My notebook is here. It can be used to process and build a model from any WhatsApp chat history. I took a very simple approach: I just strung all the messages together for each day (~1000 total) and didn’t label them by person. Despite the small amount of data (it trains very quickly), the text generated by the model was pretty good!
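This isn’t the poster’s notebook, but a minimal sketch of the preprocessing described: assuming the common WhatsApp export format of `date, time - sender: message` per line, group all messages by date and concatenate them, dropping the sender labels.

```python
import re
from collections import defaultdict

# Typical WhatsApp export lines look roughly like:
# "12/03/21, 21:15 - Alice: see you tomorrow"
LINE = re.compile(r"^(\d{1,2}/\d{1,2}/\d{2,4}), [^-]+ - [^:]+: (.*)$")

def messages_per_day(lines):
    """Concatenate all messages sent on the same day into one string,
    ignoring who sent them -- the simple approach described above."""
    days = defaultdict(list)
    for line in lines:
        m = LINE.match(line)
        if m:  # skip system messages and continuation lines
            days[m.group(1)].append(m.group(2))
    return {day: " ".join(msgs) for day, msgs in days.items()}

chat = [
    "12/03/21, 21:15 - Alice: see you tomorrow",
    "12/03/21, 21:16 - Bob: sounds good",
    "13/03/21, 08:02 - Alice: on my way",
]
print(messages_per_day(chat))
# {'12/03/21': 'see you tomorrow sounds good', '13/03/21': 'on my way'}
```

Each resulting day-string then becomes one "document" for the language model, which is why ~1000 days is enough to fine-tune quickly.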
Thank you for sharing the details, and congrats on your bronze medal. For multi-GPU training, were you using the distributed model (to_distributed)? Could you please share some details on that? Thanks.
Hey guys, I recently started writing for a new publication. My first article for them is about CNNs and heatmaps. fastai has really been a launchpad for my deep learning journey. Waiting for part 2!
After lessons 1 & 2, I’m starting to play with an image classifier for ants.
Starting small, as time is limited, by focusing on Lasius niger, Messor barbarus, and Pheidole pallidula.
I achieved 85% accuracy with a ResNet-50 without much fine-tuning so far.
Not extraordinary, but very interesting.
I just finished the second post in my mini-series on automatic captcha solving. In the first part, I used a multi-label classification approach to show that CNNs can be used to solve captchas.
In this new part, I’m using single-character classification to turn captcha solving into a standard classification task. The heatmap feature of plot_top_losses produced some very nice visuals. In fact, it was so useful that I’m considering moving the heatmap generation into its own function. That way we could use it to explain the model’s decisions for arbitrary inputs.