Share your work here ✅

Link is broken.

Link is OK for me.

Great job!

1 Like

I tried classifying all 81 fruits in the same dataset. Almost 100% accuracy on the validation set, even without training the lower layers. Maybe the fruit shapes are very basic, so such a deep NN is not required.

1 Like

Interesting observation. I think it is worth a shot to test it and experiment with how the model does on grayscale images.

1 Like

Hi Everyone!

A little more on my satellite project. (a lot actually :slight_smile: )

First - you can try it in a nice-ish webapp: yourcityfrom.space

I’ve played around with a couple of production and serving pipelines and ended up serving everything the old-fashioned way from a DigitalOcean droplet. The code and a little more about the backend and frontend are in the GitHub repo.

More interesting is the fact that when confronted with examples outside the initial dataset (which was split into training/test), the perceived accuracy is much lower than the 85% accuracy on the validation set.

I think that’s very representative of trying to use DL in the real world, where you collect data, split train/val, and then get so-so results when the model is in production.

Anyways, I spent a bit of time trying to figure out what was going on. It comes from the data collection methodology. To make a long story short, the 4000 images in the original dataset were sometimes “too similar”; essentially, patches that were geographically adjacent ended up being split between the training and validation sets, which boosted the validation accuracy significantly but didn’t generalize to images collected using a different method.

I thought that was a super interesting lesson and am looking forward to seeing if I can fix this by improving my data collection methodology! Will keep you posted.
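For anyone curious, here is a minimal sketch of the fix I have in mind: decide the split at the level of geographic tiles, so that neighbouring patches can never land on both sides. The filename scheme and the `tile_id` helper below are hypothetical, just to illustrate the idea.

```python
import random
from pathlib import Path

# Hypothetical naming scheme: patches cropped from the same satellite tile share a
# prefix, e.g. "paris_tile042_patch3.png" -> tile id "paris_tile042".
def tile_id(path: Path) -> str:
    return path.stem.rsplit('_patch', 1)[0]

images = list(Path('data/cities').rglob('*.png'))
tiles = sorted({tile_id(p) for p in images})
random.seed(42)
random.shuffle(tiles)

# Hold out whole tiles, not individual patches, so adjacent patches stay together.
valid_tiles = set(tiles[:int(0.2 * len(tiles))])
valid_images = [p for p in images if tile_id(p) in valid_tiles]
train_images = [p for p in images if tile_id(p) not in valid_tiles]
```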

35 Likes

Hello,
I have created a very basic web app (with as little HTML as possible; I hope to improve the UI) that classifies Alligator vs. Crocodile (apparently they are more similar than I thought) with a total of 400 images. Please give it a try. I am also planning to train a model on more images. :slight_smile:

Thanks for trying.
EDIT: I have removed the link and am moving my app to a more stable solution like Azure Web App or Heroku! I am also working on v2 (training on more data). Thanks everyone for your support.
EDIT2: Please find the updated app post here.

3 Likes

Does it sting? (Inspired by Jeremy’s teddy bears)

Hoverflies are well known for their mimicry. Hoverflies protect themselves by looking a lot like wasps and bees, but… they do not sting!

It would be very helpful to know whether a wasp-like insect will sting or not :-)… so I had a go and downloaded a few hundred pictures of wasps, bees and hoverflies from Google.

Using the “four lines of code” approach from lesson 2, the model reached over 95% accuracy.
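For anyone who hasn’t seen it, this is roughly what those four lines look like in fastai v1; the path, image size and epoch count here are placeholders rather than my exact settings.

```python
from fastai.vision import *

# Folder layout: one subfolder per class (wasp/bee/hoverfly), 20% held out for validation.
data = ImageDataBunch.from_folder('data/stinging', train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)
learn = create_cnn(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)
```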


It even recognised this as a bee:

and this as a hoverfly:


(Neither was part of the training or validation set.)

DL is fun!

Ad

5 Likes

this is sick! thanks for sharing :slight_smile:

2 Likes

This is wonderful! :slight_smile:

2 Likes

Great discovery. Here’s some useful info on that topic - hope it helps:

https://www.fast.ai/2017/11/13/validation-sets/

3 Likes

Just wanted to share my small contribution to the library. When I tried to classify a larger dataset (80k images, 340 classes) I got a memory-related error caused by an underlying PyTorch function call. I’ve introduced a new parameter that allows you to overcome this obstacle: https://docs.fast.ai/vision.learner.html#Working-with-large-dataset

8 Likes

Hi everyone,
Instead of doing the Lesson 2 homework (which was trying web deployment of a model), we, a few members of the Fast.ai Asia Virtual study group, are trying to build a mobile app (everything runs on the phone): an “Another Not Hotdog” app, but using PyTorch.

I’ve introduced the complete plan in a blog post here.

@cedric has already created the framework needed for the majority of steps 2 and 3 mentioned in the blog post.

You can find the Fast.ai camera repository here.
Here’s a little video demo:

I’ll be thankful for any feedback.

Best Regards,
Sanyam

32 Likes

Then should we stop using ImageDataBunch.from_folder(..., valid_pct=0.4, ...)? I might be having the same problem as henripal: the model gives as high as 95.6% accuracy on my native-language character recognition task (84 classes, with characters having sidebars and loops), which might be state of the art, though I have to check more rigorously. But val_loss > train_loss; could this be the reason my model is still underfitting? I’ve tried tinkering with the lr, which turned out to be fine, and increased the number of epochs to as high as 10, which brought marginal improvements (bringing the loss difference down from 0.05 to 0.02), but training now takes 5 hours on Colab.
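(For reference, the alternative I’m weighing is building the validation split by hand and pointing fastai at explicit folders instead of a random valid_pct split; the paths below are placeholders.)

```python
from fastai.vision import *

# Hand-picked split: data/chars/train/<class>/... and data/chars/valid/<class>/...
data = ImageDataBunch.from_folder('data/chars', train='train', valid='valid',
                                  ds_tfms=get_transforms(), size=128).normalize(imagenet_stats)
```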

Wrote up a blog post drawing on some ideas from lessons 1 and 2.
Appreciate feedback, comments. Also, more concerned if I’m revealing too much of the course material or not crediting properly.

1 Like

No, that’s not what the blog post says. Take a deeper look and tell us what you find! :slight_smile:

PKrs_currency

Pakistani currency classification using deep learning. The model achieved a classification accuracy of 98.5%.

Python Notebook for training on this dataset

The link to the gist of PKRS_Classifier.ipynb

The link to PKRS_Classifier.ipynb on GitHub

This code is based on code from a fast.ai MOOC that will be publicly available in Jan 2019.

Dataset

I wanted to see how accurate a neural network would be at categorising images of currency. Being a resident of Pakistan, I wanted to train it on the Pakistani Rupee, but a quick Google search revealed that there were very few camera images of the different Pakistani notes on the internet, so I created the dataset myself.

The different bank notes in the dataset are as follows:

I took the 6 images below from this blog post; the images in the dataset are my own.

Creation of Dataset

The first step was to find as many currency notes as I could; everyone in the family lent me their riches for science!

I then took pictures of all the notes and divided them into a train and validation set. The distribution of the dataset is as follows:

| Category | Train | Valid | Total |
| --- | --- | --- | --- |
| ten | 101 | 44 | 145 |
| twenty | 12 | 6 | 20 |
| fifty | 15 | 7 | 22 |
| hundred | 14 | 6 | 20 |
| five_hundred | 2 | 2 | 4 |
| Thousand | 5 | 3 | 8 |
| TOTAL | 149 | 68 | 217 |
| Percentage | 68.7% | 31.3% | 100% |

The data was distributed into a dataset directory structure as shown below:
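Roughly speaking (exact folder names are illustrative), it follows the standard layout that fastai’s from_folder expects, with the image counts from the table above:

```
pkrs/
├── train/
│   ├── ten/           # 101 images
│   ├── twenty/        # 12 images
│   ├── fifty/         # 15 images
│   ├── hundred/       # 14 images
│   ├── five_hundred/  # 2 images
│   └── thousand/      # 5 images
└── valid/
    ├── ten/           # 44 images
    ├── twenty/        # 6 images
    ├── fifty/         # 7 images
    ├── hundred/       # 6 images
    ├── five_hundred/  # 2 images
    └── thousand/      # 3 images
```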


4 Likes

Cool project @G_M! Kinda interesting what happened with your errors and accuracy after unfreezing. It looks like the model started to overfit quite a bit (epochs 8 to 13) and then corrected and stopped overfitting?

Did you try training for fewer epochs before unfreezing and then unfreeze and train to see if that gives better results? I’d be interested to see if that gives you anything noticeably different.

1 Like

Thank you @prratek

I am not sure why that is. Maybe because after unfreezing the learning rate for the starting layers of the network is 1e-5, which may be so low that the weights adjust only slowly: the first 8 epochs were not able to change the weights of the starting layers much, so the effect of changing them only showed up after the 8th epoch, and by the 13th epoch gradient descent had moved the weights far enough in the right direction to start improving the accuracy again. I am just guessing here :slight_smile:
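For context, the unfreezing step I’m describing looks roughly like this in fastai v1 (`learn` is the create_cnn learner from the notebook; the epoch count and the upper end of the learning-rate slice here are illustrative, only the 1e-5 for the earliest layers is the value I mentioned):

```python
learn.unfreeze()                                   # train all layers, not just the head
learn.fit_one_cycle(13, max_lr=slice(1e-5, 1e-3))  # earliest layers get ~1e-5, later layers get larger rates
```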

I have not tried that, but it sounds interesting and I will try it. I am not sure what effect it might have.

Definitely one of the more meaningful things to do with the whole family’s savings. :wink:

3 Likes

No one is willing to give me all their money when I ask them; I don’t know why though :stuck_out_tongue:
They eventually come around to the idea.

1 Like