Share your work here ✅

Poisonous plants are dangerous for unsuspecting people such as children, especially the plants that are common in backyards. Many poisonings happen because gardens in public and private areas are left unsupervised.
One solution to this problem is image classification using deep learning. Keeping this in mind, I made a custom dataset of the poisonous plants that are common in my home area. I got a list of these from here:


I have made the data set available on Kaggle:
https://www.kaggle.com/nitron/poisonous-plants-images
Using fastai, I have successfully created an image classifier that classifies a plant’s image into 8 categories of poisonous plants. The classifier has 98% accuracy on the current dataset. Here is the Kaggle kernel:
https://www.kaggle.com/nitron/poisonous-plant-classifier
This is one of the first steps towards the solution. Next, I will make a web app to demonstrate the capabilities of the current model.
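For anyone who wants to reproduce something similar, the core training loop in fastai v1 for a folder-per-class dataset like this is quite short. A minimal sketch (the folder names, architecture, and epoch counts are illustrative, not taken from the actual kernel):

```python
from fastai.vision import *

# Illustrative layout: one folder per class, e.g. data/oleander/, data/poison-ivy/, ...
path = Path('data/poisonous-plants')
data = (ImageDataBunch.from_folder(path, valid_pct=0.2, size=224, bs=32,
                                   ds_tfms=get_transforms())
        .normalize(imagenet_stats))

learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)                              # train the new head
learn.unfreeze()
learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-3))    # fine-tune the whole network
learn.export()                                      # saves export.pkl for a future web app
```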
Any helpful tips are welcome :slight_smile:

3 Likes

I just finished my first model and am really happy with it–recognizing keystrokes on a laptop using only the microphone.

Nice! Did you collect that data yourself? How well does it generalize? Does it matter where the microphone was compared to the keyboard? Does it work with multiple keyboards or just the one the sounds were recorded from?

:slight_smile: Thanks for your interest!
I did collect the data myself: 30 minutes of data, ~6000 keystrokes, soft and loud strikes, no background noise, single channel input, from built-in microphone on a single MacBook Pro, accuracy reported from validation set (20%). I would bet that the fixed distance from mic to each key was important in classification (but I am not certain–could be answered by mapping the confusion matrix to physical distance between keys).
I imagine potential opportunities for improvement to be: introduction of background noise, generalization to other laptops, complex keystrokes (e.g. cmd+c), event extraction from continuous input.
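For anyone who wants to try something similar, the general recipe is to slice the recording into per-keystroke clips, turn each clip into a mel-spectrogram image, and train an ordinary image classifier on those. A rough sketch of the spectrogram step, assuming librosa and clips already sorted into one folder per key (this is illustrative rather than my exact code):

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt
from pathlib import Path

def clip_to_spectrogram(wav_path, out_path, sr=22050):
    """Turn one keystroke clip into a mel-spectrogram image."""
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    plt.figure(figsize=(2, 2))
    librosa.display.specshow(mel_db, sr=sr)
    plt.axis('off')
    plt.savefig(out_path, bbox_inches='tight', pad_inches=0)
    plt.close()

# Hypothetical layout: clips/<key>/<clip>.wav  ->  spectrograms/<key>/<clip>.png,
# after which the spectrogram folders can be fed to a normal image classifier.
for wav in Path('clips').rglob('*.wav'):
    out = Path('spectrograms') / wav.parent.name / (wav.stem + '.png')
    out.parent.mkdir(parents=True, exist_ok=True)
    clip_to_spectrogram(wav, out)
```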

I’m happy to share it with anyone who wants a copy!

6 Likes

Identifying hockey teams - sorry for the duplicate post on the main forum.

Here’s my weekends’ attempt at using fastai to identify hockey teams based on their jerseys.

jerseyfier.com

The model was trained, and the project is hosted, on FloydHub, who graciously donated some CPU cycles until I find a more permanent home. I’ve tried Heroku, but the slug size is a major limitation - so suggestions are welcome.
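In case it helps anyone facing the same hosting question, the serving side of an exported fastai model can be tiny; the slug mostly stays small if the weights are downloaded at startup instead of being bundled. A minimal sketch with Flask (the file name and route are assumptions, not what the site actually runs):

```python
from flask import Flask, request, jsonify
from fastai.vision import load_learner, open_image

app = Flask(__name__)
# Assumes the model was exported with learn.export('export.pkl')
learn = load_learner('.', 'export.pkl')

@app.route('/predict', methods=['POST'])
def predict():
    img = open_image(request.files['file'])          # uploaded jersey photo
    pred_class, pred_idx, probs = learn.predict(img)
    return jsonify({'team': str(pred_class),
                    'confidence': float(probs[pred_idx])})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```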

PS: I’m personally a soccer fan - but soccer jerseys change quite frequently and would’ve been harder to train on.

1 Like

Hi Dipam7,

I too participated in this and got a score of 0.9557. I was in 15th place when I posted and have now moved down to 20th. Try as I might, I am not able to get a better score than that. Let’s compare notes…

Suresh

This is a very interesting project! Do you have a link to your Kaggle kernel?

Hi Jona,

Interesting application! Can you share the confusion matrix? I am curious to know whether the model got confused with keys that are equidistant from the mic.

Suresh

I wanted to come up with a segmentation problem to practice the topics covered in Lesson 3. After a bit of thought I realized I had a dataset from work that could be adapted pretty easily.

I work for a company that makes games. We have about 5000 images of creatures in front of the egg they hatched out of that look like this:

easterdragonstubby

For a significant number (~3500) of them we also have a matching “Who’s that Pokemon” style image that looks like this:

easterdragonstubby

Unfortunately, we have lost the source files for many of the 1500 images that are missing the outlined version so it’s not easy to separate the creature from its egg (or to generate a “Who’s that Pokemon” style image with the creature and its egg in different colors).

Well, this looks a lot like a segmentation problem! If I can train a model to classify which parts of the image are “background”, “creature”, and “egg” then I can use the generated masks to extract the creature from the background image and to create “Who’s that Pokemon” style images for the part of the dataset that is missing them. In fact, the segmentation mask is a pretty close approximation for what the “Who’s that Pokemon” style image would look like!

I used the lesson3-camvid notebook as a jumping off point. And, after a full day of work, I’ve now got a model that correctly predicts about 97% of the pixels.
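For reference, the camvid-style setup boils down to a handful of lines in fastai v1. A rough sketch (the paths, mask naming, and hyperparameters here are placeholders rather than my exact code):

```python
from fastai.vision import *

# Illustrative paths and mask naming
path = Path('data/creatures')
codes = ['background', 'creature', 'egg']
get_mask = lambda x: path/'masks'/f'{x.stem}_mask.png'

src = (SegmentationItemList.from_folder(path/'images')
       .split_by_rand_pct(0.2)
       .label_from_func(get_mask, classes=codes))

data = (src.transform(get_transforms(), size=256, tfm_y=True)
        .databunch(bs=8)
        .normalize(imagenet_stats))

def pixel_accuracy(input, target):
    "Fraction of pixels whose predicted class matches the mask."
    return (input.argmax(dim=1) == target.squeeze(1)).float().mean()

learn = unet_learner(data, models.resnet34, metrics=pixel_accuracy)
learn.fit_one_cycle(10, max_lr=1e-3)
```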

Example image input:

worldvaldenmark

Example output from the validation set (ground truth / model output):

You can see there are a few detail areas in the image that it doesn’t quite get right (whiskers, between the neck and the paper, around the base of the tail) but it looks pretty good in most cases!

There are also some where it really doesn’t do well (I think that’s largely because there aren’t enough similar training examples). Here’s one such example:

This particular image comes from a set of creatures that come out of balloons instead of eggs. Here was the input image the model was going off of (the drop shadow also made things a bit hard for it I think):

twistcat

The hardest part was getting my images to conform to the format fastai was expecting. The original source images had a fourth channel (alpha transparency) and also had lots of transparent pixels containing extraneous data in their RGB channels that was invisible because of the alpha channel. The mask images had anti-aliasing and an alpha channel, both of which had to be removed.

I had a lot of trouble making these into transformations that happen on the fly (independently transforming the x and y images doesn’t seem to be supported in fastai yet), so I ended up running the transforms up front and writing the results to disk before training my model.
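For anyone hitting the same wall, the up-front preprocessing boiled down to flattening the alpha channel of the source images and snapping the anti-aliased masks back to clean class indices. Roughly, with PIL and numpy (the paths and mask colours below are illustrative, not our actual palette):

```python
from pathlib import Path
from PIL import Image
import numpy as np

def flatten_alpha(src, dst, bg=(255, 255, 255)):
    """Composite an RGBA image onto a solid background so the junk RGB values
    hiding behind fully transparent pixels can't leak into training."""
    img = Image.open(src).convert('RGBA')
    background = Image.new('RGBA', img.size, bg + (255,))
    Image.alpha_composite(background, img).convert('RGB').save(dst)

def clean_mask(src, dst, palette=((0, 0, 0), (255, 0, 0), (0, 0, 255))):
    """Snap an anti-aliased RGBA mask to the nearest class colour and save it as a
    single-channel image of class indices (0=background, 1=creature, 2=egg)."""
    rgb = np.array(Image.open(src).convert('RGB'), dtype=np.int32)
    colours = np.array(palette, dtype=np.int32)
    dists = ((rgb[:, :, None, :] - colours) ** 2).sum(-1)   # distance to each class colour
    Image.fromarray(dists.argmin(-1).astype(np.uint8)).save(dst)

for out_dir in ('clean/images', 'clean/masks'):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
for src in Path('raw/images').glob('*.png'):
    flatten_alpha(src, Path('clean/images')/src.name)
for src in Path('raw/masks').glob('*.png'):
    clean_mask(src, Path('clean/masks')/src.name)
```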

I may try to put together a pull request that will make this process of adding independent custom transforms easier for people in the future (it looks like fastai v0.7 may have supported it with an is_y param passed to the transform function). But there’s a bit more to do than that because the masks are already being shrunk to a single channel prior to the transformations being run.

Edit: I now got a chance to try it on some of the images where we’re missing the “ground truth” to see how it does.

In some cases it does really well!

monster-truck

Unfortunately, in some of our older images, the input images aren’t styled the same way… it turns out that one of the things our model learned is the layer style applied to our images (the egg has a semitransparent white overlay over it in all of the training data). So on the old ones without this overlay, it predicts everything as “creature”.

all-pet

2 Likes

That’s some really impressive work!

1 Like

Thanks @jeremy ! Already wrapping up part two! Thanks to your course, I finally stopped making excuses and started writing stuff down. Now it seems to be getting addictive.

If you find a way to save your model on Kaggle, then please tag me in the response as well. I’ve been suffering from the same issue, and it’s pretty time-consuming.

Hi Suresh, did you try training with different sizes (training at 128 and then using those weights for 256)? I haven’t tried much hyperparameter tuning since I did not get the time (I am working as well). Also, I haven’t made any notes as such; I refer to the official course notes only. We can stay in touch and keep each other informed about interesting projects we work on or hackathons we participate in. Cheers.
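In fastai v1 that progressive-resizing trick looks roughly like this (the architecture, batch sizes, and epoch counts are placeholders, not what I actually ran):

```python
from fastai.vision import *

path = Path('data/hackathon')   # placeholder for the competition data

def bunch(size, bs):
    return (ImageDataBunch.from_folder(path, valid_pct=0.2, size=size, bs=bs,
                                       ds_tfms=get_transforms())
            .normalize(imagenet_stats))

learn = cnn_learner(bunch(128, 64), models.resnet50, metrics=accuracy)
learn.fit_one_cycle(5)                    # train at 128px first

learn.data = bunch(256, 32)               # same weights, bigger images
learn.freeze()
learn.fit_one_cycle(3)
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6, 1e-4))
```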

Hello Alex, a 99% accuracy rate on a dataset that small is a classic sign of overfitting. You may want to explore dropout layers within the network, along with increasing the volume of data to improve its diversity. I’m super interested in your work!
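If it’s an image model built with fastai v1, that can be as simple as raising the dropout probability of the head and leaning harder on augmentation; a minimal sketch, with purely illustrative values:

```python
from fastai.vision import *

data = (ImageDataBunch.from_folder(Path('data'), valid_pct=0.2, size=224, bs=16,
                                   ds_tfms=get_transforms(max_rotate=20., max_zoom=1.2))
        .normalize(imagenet_stats))

# ps raises the dropout probability in the custom head; wd adds weight decay
learn = cnn_learner(data, models.resnet34, metrics=accuracy, ps=0.6, wd=1e-2)
learn.fit_one_cycle(8)
```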

1 Like

Impressive work! And congrats on being featured by Jeremy himself!

Please see my response here (regarding your previous comment on saving models in Kaggle Kernels):

3 Likes

Please see my response here:


Hope this helps!

My apologies for forgetting this thread, I should have posted about my work here in the first place!

In order to avoid spamming the thread, I will post the link again to my original description.

In a sentence, I’m currently using deep learning models (mainly image-to-image translation) to predict, in real time, urban environmental performance. I’m generating my own synthetic and real-life data to train the models, and I already have pretrained models that can predict certain aspects of performance for 3 locations in the US. Results are quite encouraging (you can see some example predictions below; the left image is the input, the middle is simulated, and the right is predicted), and the next step is to move on to genuinely complex studies like thermal comfort.

I’m only partially using fast.ai for this (the CycleGAN implementation), and I’m eagerly waiting for more generative models to be introduced by the community (I will do my best as well once I get some time).

If anyone is interested in this, let me know. I’m more than happy to contribute my domain knowledge (environmental design) to help you get introduced to this important topic.

Kind regards,
Theodore.

1 Like

Thanks for the code. Congratulations on the Kaggle entry!

Hi everyone, I am in my third year of college, and my final project is a pedestrian counter on a Raspberry Pi. Is it possible to do it with PyTorch and fastai? I would also appreciate some help on how to start and what to use.
Thank you, and I hope I will soon have my project done and be able to publish it here.

Thank you.

First step: identify a training dataset of pedestrian images. After that, this is a fairly straightforward problem. There are most likely similar Kaggle competitions, and commercial tools are available for this task. You should do some market research to get ideas for implementing your project. After that, work your way through the fastai image classification and segmentation lessons. Best of luck!
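To make that concrete, one way to prototype the counting part before training anything yourself is to run a COCO-pretrained detector from torchvision and count the “person” boxes per frame. A rough sketch (the threshold and model choice are just starting points, and a full Faster R-CNN will likely be too heavy to run directly on a Raspberry Pi without a lighter model or offloading):

```python
import torch
import torchvision
from torchvision import transforms as T
from PIL import Image

# COCO-pretrained detector; in COCO, label 1 is 'person'
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def count_pedestrians(image_path, score_thresh=0.7):
    img = T.ToTensor()(Image.open(image_path).convert('RGB'))
    with torch.no_grad():
        out = model([img])[0]
    keep = (out['labels'] == 1) & (out['scores'] > score_thresh)
    return int(keep.sum())

print(count_pedestrians('frame.jpg'))   # hypothetical frame captured by the Pi camera
```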

Thank you, I didn’t mention that the project is meant to be done by me, and by that I mean coded by me, and I want to learn deep learning, PyTorch, and fastai.

Thank you for your response Jeff.