Share your work here ✅


#862

Thanks for replying. I was really interested in your data collection process because you obviously have to be an expert to tell the real thing apart from fakes, as counterfeiters have become pretty good.

And to the best of my knowledge, the luxury industry hasn’t yet found a way to protect customers and their brands against counterfeits, though they’re exploring things such as blockchain.

It’s definitely a cool project, and if you continue with it, I’d be happy to brainstorm with you.


(Jeremy Howard (Admin)) #863

FYI you’re under-fitting, so you should train for longer.
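
(A minimal illustration of what “train for longer” looks like in a fastai v1 pipeline; the path, architecture, and epoch counts below are placeholders, not a recommendation:)

```python
from fastai.vision import *

# Illustrative only: the path, architecture, and epoch counts are placeholders.
data = ImageDataBunch.from_folder('data/images', valid_pct=0.2, size=224).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)

learn.fit_one_cycle(4)
# If the training loss is still above the validation loss, the model is
# under-fitting: keep training (and/or nudge the learning rate up a bit).
learn.fit_one_cycle(8)
learn.recorder.plot_losses()   # compare the train and valid loss curves
```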


(Jeremy Howard (Admin)) #864

That’s not quite what you said on Twitter - what @sandhya says here is correct, however. But more importantly, what @sandhya has done here is much more respectful of the work that @JCastle did.


(Jeremy Howard (Admin)) #865

Can you tell the difference from the photos? (If it’s a task that humans can’t do from your data, then generally a computer won’t be able to either, so you want some kind of human baseline to compare to.)


(kelvin chan) #866

That “downplaying” apology applies to me as well - you are doing the hard work here, and I am just a bystander.

The problem is not so simple. As far as I know, there’s a three-way split: train, validation, and test set.

Train - seen by the model and used to adjust the weights.
Validation - not seen by the model; used to continuously evaluate progress, but it is seen by you, the human decision maker.
Test - not seen by the model and not used during training in any manner, not even to print its loss or accuracy. It is run only once, at the very end. Ideally, this data is hidden under a rock until you finish all your work.

This is really the most honest way, with practically no chance of accidental overfitting. And of course, this is how Kaggle works as well: you don’t see the test labels while you work, and you only get a score reported back to you.
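
To make the split concrete, here’s a minimal sketch (the proportions, file names, and labels are just placeholders):

```python
import random

random.seed(42)

# Toy stand-in for a labelled dataset: (filename, label) pairs.
items = [(f'img{i:04d}.jpg', random.choice(['real', 'fake'])) for i in range(1000)]
random.shuffle(items)

n = len(items)
train = items[:int(0.70 * n)]                # seen by the model; weights are fit on this
valid = items[int(0.70 * n):int(0.85 * n)]   # not seen by the model, but seen by you while tuning
test  = items[int(0.85 * n):]                # hidden under a rock; evaluated exactly once at the end
```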

This point may be explained further in the fastai course, I’m not sure. But if not, you can easily google it.

Having said all this, I get JH’s point. There’s a good chance you achieved equal or higher accuracy than the paper, because you aren’t tuning the hyperparameters like mad, only mildly via the learning rate or the number of epochs, if at all. But I would be careful if the claim is a comparison against a research paper and is then mentioned on Twitter, because then the result will have to be defended more. I perfectly understand that we are all just learning; I just shared this because I ran into bad outcomes before by leaving this kind of work out.


(Stefano) #867

The post you’ve cited is very interesting: I definitely want to take a closer look at it! I agree with the author that having “unknown” people in the validation set lets you better measure how the model generalizes.

My notebook is an “initial trial” / “starter kit” for a generic object classification competition on Kaggle with fast.ai, not an attempt to win it.


(Stefano) #868

Take a look at your data augmentation:

[image]

In this sample the label is probably wrong: after augmentation, fewer than 45 dots are visible.
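
One way to avoid that (a sketch, assuming the default fastai v1 transforms are being used; zoom, warp, and rotation can all push dots out of the frame and silently change the true count):

```python
from fastai.vision import get_transforms

# Keep only label-preserving augmentations for a counting task:
# no zoom, no warp, no rotation - any of those can crop dots out
# (or reflect extra ones in) and invalidate the count label.
tfms = get_transforms(do_flip=True, flip_vert=True,
                      max_rotate=0., max_zoom=1., max_warp=0.)
# ...then pass ds_tfms=tfms when building the DataBunch.
```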


(kelvin chan) #869

I naively knew sourcing this data is hard, but after hearing your experience, it looks even harder. Making an app that serves customers and then collects the data is probably the way to go, assuming you have no deep connections in that industry. I had thought about a cannabis-identifying app, and you can bet it’s not easy to collect a large, diverse dataset before running into legal trouble, even if you live in Canada. :grinning: I googled sativa and some well-known brands, and I already got discouraged. This is another area where you probably need connections with the right business people. For counterfeit bags, it would help if you had a personal collection and just took a whole lot of photos.


(Cory) #870

I can’t tell the difference between a good knock-off and an original. However, it seems that there are people who can. Often a fashion blogger/account will post a “can you tell the real from the fake” challenge like this one, and the majority of people who comment are able to spot the fake, so I assume it has to be possible. Sending one of these bloggers the same images and seeing if they can beat the classifier would be an interesting exercise.


(kelvin chan) #872

If you have time, take a quick look at course 3 of Andrew Ng’s Deep Learning Specialization on Coursera. He has a very in-depth discussion of human baselines, Bayes error, and super-human performance, using radiologists as a case study.


(kelvin chan) #873

Another thing: what’s the prior probability that any given bag you see is a fake? You may run into the same issue as in cancer diagnosis, so you may ultimately want F1, precision, and recall in addition to accuracy. The prior may also vary from region to region; if you visit China, it may be much higher than in the US/Canada.
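
For reference, a quick sketch of those metrics with scikit-learn (the numbers are made up, just to show how accuracy can look fine while recall on the fakes is poor):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy example with heavy class imbalance: 1 = fake, 0 = genuine.
y_true = [0]*90 + [1]*10
y_pred = [0]*88 + [1]*2 + [0]*6 + [1]*4   # the model misses most of the fakes

print('accuracy :', accuracy_score(y_true, y_pred))   # 0.92 - looks fine
print('precision:', precision_score(y_true, y_pred))  # of predicted fakes, how many really are fakes
print('recall   :', recall_score(y_true, y_pred))     # of the real fakes, how many we caught
print('f1       :', f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```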


(Jason Patnick) #874

I just posted the notebook that reaches 95% accuracy on the CamVid-Tiramisu dataset to my GitHub.

I also included a notebook showing how I converted the labels of the CamVid dataset to match the labels the Tiramisu paper used, and some other training I did.
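
(For anyone curious, relabelling a segmentation mask usually comes down to a pixel-wise lookup table; here is a sketch with a made-up mapping, not the actual correspondence from the notebook:)

```python
import numpy as np
from PIL import Image

# Hypothetical mapping from original CamVid class ids to the ids the paper uses;
# the real correspondence is in the conversion notebook, these numbers are placeholders.
old_to_new = {0: 0, 1: 0, 2: 1, 3: 2}        # ...and so on for every original class

lut = np.zeros(256, dtype=np.uint8)          # lookup table indexed by old class id
for old, new in old_to_new.items():
    lut[old] = new

mask = np.array(Image.open('camvid_label.png'))               # hypothetical label mask
Image.fromarray(lut[mask]).save('camvid_label_remapped.png')  # vectorised relabelling
```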

This is the first personal project I’ve shared. Any feedback about my project and/or GitHub is welcome :slight_smile:


(Pierre Guillou) #875

My post on how to create a Web App on Render from a fastai v1 model (thanks to @anurag for making this possible), with a focus on local testing before online deployment (for the occasion, I’ve reloaded an ImageNet classifier).


(Kaspar Lund) #876

Yes, that’s what I meant.


#877

Hi - first post here. In the spirit of sharing “also the small things”, I’m just putting my web scraper code for Wikipedia here: https://github.com/NicolasNeubauer/fastai-stuff/blob/master/scrape_wikipedia.ipynb

If you find a table containing image references and a column you want to use as a label, you can take this code, adapt the extraction part, and it’ll write everything out in a “from_folder”-friendly format.
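
For reference, the “from_folder”-friendly layout is just one sub-folder per label, which fastai v1 can then pick up directly (the folder names below are made up):

```python
from fastai.vision import ImageDataBunch, imagenet_stats

# Expected "from_folder"-friendly layout (names are just examples):
#
#   data/my_dataset/
#       cat/  img001.jpg, img002.jpg, ...
#       dog/  img042.jpg, ...
#
# i.e. one sub-folder per label, which the scraper writes out for you.
data = (ImageDataBunch.from_folder('data/my_dataset', valid_pct=0.2, size=224)
        .normalize(imagenet_stats))
```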

edit: wrote this before watching the 2nd video; obviously could use more fastai-internal tools.


(building render.com) #878

Nice work @pierreguillou. I think I’ll borrow your idea of documenting and testing locally before deploying on Render and update the deployment guide.


(Josh Varty) #879

Yes, I framed it as a classification problem as opposed to a regression or object detection problem. I wanted to see if it was possible to distinguish between different classes of images when those images were composed of things with identical features (e.g. all circles).

I played around with repeating this as a regression problem and noticed that it didn’t seem to generalize beyond the range on which it was trained. For example, instead of limiting the number of elements to 45-50, I tried using a range of 1-50 and optimizing the mean squared error.

While this worked very well for images within the range of 1-50, it didn’t generalize very well outside of it. The network would estimate values much, much higher for counts it had never seen before.

Both of these approaches are probably not ideal for “counting” and something like object detection would probably make more sense, but I wanted to play around with edge cases of convolutional neural networks in order to try to learn new things about them.
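
(For anyone who wants to try the regression variant, a rough fastai v1 sketch of such a count-regression setup; this is not the exact notebook, and the dataframe, its columns, and the path are illustrative:)

```python
from fastai.vision import *

# df is a hypothetical dataframe with an image-filename column 'name'
# and a float column 'count' holding the number of circles.
data = (ImageList.from_df(df, path='data/dots', cols='name')
        .split_by_rand_pct(0.2)
        .label_from_df(cols='count', label_cls=FloatList)  # treat the count as a regression target
        .databunch(bs=64))

learn = cnn_learner(data, models.resnet34, loss_func=MSELossFlat())
learn.fit_one_cycle(5)
```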


(kelvin chan) #880

That’s very interesting. I assume you have also regularized it with weight decay or dropout. If it always overestimates when the data is above the upper range, you can experiment with predicting log(count) instead of count, or some other transform, and see if it does better. The hunch also comes from the fact that you’re predicting a float with regression while the count is an integer, which makes it harder for your CNN.
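
A quick sketch of the log-target idea (using log1p/expm1 so a count of 0 stays defined; the numbers are purely illustrative):

```python
import torch

counts = torch.tensor([1., 5., 20., 50.])

targets = torch.log1p(counts)     # train the regressor on log(1 + count)
pred_log = targets + 0.1          # stand-in for the model's output in log space
pred = torch.expm1(pred_log)      # map back to a count at inference time
print(pred)
```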


(kelvin chan) #881

In addition (I didn’t check your notebook in full), I’m just curious whether you can generate an input such that it predicts a negative count. I mention this because you may want to build the design so that nonsense like this can’t happen, and using log/exp or some other transform may achieve that.
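
One way to build that in (a sketch with a generic PyTorch head, not tied to any particular notebook): constrain the output to be non-negative, e.g. with a softplus, or an exp if you train on log-counts:

```python
import torch
import torch.nn as nn

# A toy regression head whose output can never be negative.
head = nn.Sequential(
    nn.Linear(512, 1),   # 512 stands in for the backbone's feature size
    nn.Softplus(),       # smooth and strictly positive, so no negative counts
)

features = torch.randn(4, 512)     # fake backbone features for 4 images
print(head(features).squeeze(1))   # every prediction is >= 0 by construction
```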


(Josh Varty) #882

Yes, those are all good suggestions. With the naive approach I used you can definitely generate nonsense: when I trained on only a portion of the range (10-50), providing low counts like 1 or 2 resulted in negative predictions.