Share your work here ✅

Thank you! I’m actually more interested in how a single channel is interpreted by the model and how it affects learning.


@etown I have gotten it to convert from PyTorch to ONNX, but I got an error (“Gather” not supported) while trying to convert to CoreML. Where were your bottlenecks?

Cool! I’d love to see a walk-thru on your blog of exactly how you created that Azure website :slight_smile:


Has anyone successfully converted an ONNX model to CoreML for an iOS mobile app? I’ve spent a few hours with little to no luck.


Here you go, hopefully it’s sufficient to get others trying it out.


Hi, I have made a healthy vs junk food detector app.

I am able to get 85–90% accuracy even though the data, which I downloaded off Google, is quite noisy.
One difference between this and other classification tasks, e.g. different dog breeds, is that even though two categories can look the same, the output is singular: it can be this or that, never a mix of the two. When it comes to food, that boundary can be blurred, as a dish can be a bit of both.
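As an aside, that “bit of both” case is exactly the difference between single-label (softmax) and multi-label (sigmoid) outputs. A toy sketch in plain Python, separate from the fastai API, of why softmax forces an either/or answer while independent sigmoids can say “both” (the logit values are made up):

```python
import math

def softmax(logits):
    """Exponentiate and normalize: outputs compete and must sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    """Independent per-class probability: outputs need not sum to 1."""
    return 1.0 / (1.0 + math.exp(-x))

# A dish that is genuinely somewhat healthy AND somewhat junk:
logits = [1.2, 1.0]  # hypothetical [healthy, junk] scores from a model

probs = softmax(logits)               # forced to split: ~[0.55, 0.45]
multi = [sigmoid(x) for x in logits]  # both can be high: ~[0.77, 0.73]
```

With sigmoids and a multi-label loss, the model could legitimately flag a dish as both at once, instead of being forced to pick.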

I started off with ~500 images per category using queries like “health food dishes -junk” and “unhealthy food dishes -healthy”. With a little cleanup I got around 90% accuracy. However, it had a limited view of food items and mostly consisted of regular junk food like burgers and fries on one hand and salads on the other. So next I consciously picked 4 different cuisines, namely American, Italian, Indian and Chinese, and downloaded healthy and junk food images for each of them. Then I added some sweets to the junk food category and greens to the healthy category. Even with that I’m able to keep the accuracy between 85% and 90%, which to me is quite good.
I was worried the model might be too biased toward the color green, so I picked out a couple of green junk foods. First was this avocado burger, and to my surprise it classified it correctly. Perhaps it is more biased towards burgers :slight_smile: Next I gave it a green cupcake, and it failed to identify it correctly. I noticed that my training data had no images of cupcakes. So it’s just a matter of adding the right set of data and the model will somehow magically extend.

Here’s the confusion matrix with 3500-odd healthy food images and 2400 junk food images:
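(The matrix itself is in the screenshot; for anyone curious what fastai’s `plot_confusion_matrix` is actually tallying, here is a minimal pure-Python sketch — the labels and predictions below are made up, not my actual results:)

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Count (actual, predicted) pairs into a labels x labels grid."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

labels = ["healthy", "junk"]
y_true = ["healthy", "healthy", "junk", "junk", "junk"]
y_pred = ["healthy", "junk",    "junk", "junk", "healthy"]

cm = confusion_matrix(y_true, y_pred, labels)
# rows = actual, columns = predicted:
# [[1, 1],
#  [1, 2]]
```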

And here are some of the top losses:

While for some of them it may be unclear even to us whether they are healthy or junk, a few others are definitely misclassified.
For example, I have labeled a few popcorn images as healthy, but this one I left as junk since I think it is caramelized. :slight_smile: The 3 in the middle column are definitely misclassified.

I have taken @simonw 's code and enhanced it to deploy on Heroku. Here are a few screenshots:

I have put all the necessary code to deploy it to Heroku in my GitHub repo, along with a detailed write-up, so I hope it helps some of you deploy your own app.
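For reference, the Heroku-specific bits boil down to a couple of small files; the ones below are an illustrative sketch rather than an exact copy of what’s in the repo:

```
# Procfile: tells Heroku how to start the web process
web: python app.py serve

# runtime.txt: pins the Python version Heroku installs
python-3.6.8
```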

Lastly, a big thanks to Jeremy & Rachel, along with the other folks who made this course what it is today. I had done the first few chapters of the v2 course, and I can say v3 is really awesome; this thread is testimony to that. Cheers.


I have Windows 10 Home edition. Is it possible to use Docker CE on it? It is suggesting I use Docker Toolbox, and I am unable to proceed with that.

Technically one can run Docker CE on Windows 10, but considering you want to deploy to Heroku (or any other platform), I’d suggest using a Linux installation. Docker on Windows (without a Linux VM) can only run Windows apps. You can use your Google account to run on GCP with $300 worth of free credits; best way IMO.

Great work! Nice to see an improvement in accuracy over the previous good result. What do you attribute this to? Superconvergence? Better data augmentation?

Thanks for providing the code on GitHub! I had to change a few things to deploy it on my local Ubuntu machine (read data from CSV, change the interface) without a Docker installation. Just run


and now you can predict plant leaf types via the web. This is awesome! =)


Thanks Erick!
I believe the 1cycle policy was mainly responsible for the improvement. I don’t know if the new fastai has different augmentation features, but if so, they surely helped too.
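In case it helps, the core of the 1cycle schedule is easy to sketch: ramp the learning rate up to a maximum, then anneal it back down (while momentum does the inverse). A rough pure-Python sketch of the LR half, assuming linear warmup and cosine annealing; the constants mirror fastai’s `fit_one_cycle` defaults but are illustrative:

```python
import math

def one_cycle_lr(step, total_steps, lr_max, pct_start=0.3, div=25.0):
    """Learning rate at `step`: linear ramp up, then cosine anneal down."""
    warmup_steps = int(total_steps * pct_start)
    lr_start = lr_max / div
    if step < warmup_steps:
        # linear warmup from lr_max/div up to lr_max
        frac = step / warmup_steps
        return lr_start + (lr_max - lr_start) * frac
    # cosine anneal from lr_max down toward ~0
    frac = (step - warmup_steps) / (total_steps - warmup_steps)
    return lr_max * (1 + math.cos(math.pi * frac)) / 2

# e.g. a 100-step cycle peaking at lr = 0.01 around step 30
schedule = [one_cycle_lr(s, 100, 0.01) for s in range(101)]
```

The briefly large learning rate is also where the regularization effect comes from: it keeps the optimizer from settling into sharp minima.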

Yes - agreed that some kind of deeper exploration would be interesting; visualizing the layers or even something like this article

Right now I still have a problem with my train/val split and major leakage. I’ll fix that first, then explore those solutions. Also, if you have any other ideas, I’m a taker!

Superconvergence? Explain pls

Some Facial Expression Recognition (FER) with fastai v1:


Been working on a classifier of 13 different brands of Swiss watches. I got a 0.280851 error_rate after 7 epochs and tuning the learning rate. I’ll experiment with more epochs.


I have been working on an American Sign Language dataset. I used a resnet34 model and got an accuracy of 99.97%. I used OpenCV to make live predictions of hand signs via webcam. Here are some of the top losses:

Here is a video of my ASL-live-predictor:
ASL live predictor
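The live loop in the video is roughly: grab a frame with OpenCV, crop and normalize it, and hand it to the learner. A sketch of the frame preparation; the crop size and normalization are assumptions, and the capture loop is only outlined (it needs a camera and a trained model, and the `learn.predict` hand-off is hypothetical):

```python
import numpy as np

def prepare_frame(frame, size=224):
    """Center-crop an HxWx3 frame to size x size and scale pixels to [0, 1]."""
    h, w = frame.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    crop = frame[top:top + size, left:left + size]
    return crop.astype(np.float32) / 255.0

def live_loop():
    """Grab webcam frames and classify them until 'q' is pressed."""
    import cv2  # imported here so prepare_frame works without OpenCV installed
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x = prepare_frame(frame)
        # pred = learn.predict(x)  # hypothetical: hand off to the fastai learner
        cv2.imshow("ASL", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
```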

Link to ASL-live-predictor github repository:


Look at references from


Here is another example of applying the fast.ai library to problems in the cancer genomics domain. The problem is discriminating between true and false variants detected by automatic workflows in tumor-normal cancer sequencing. Comments and questions are very welcome. Thanks!


Well, that’s just pretty awesome.

I am impressed by how the 1cycle policy and superconvergence methods do regularization automatically, so I wrote a blog about it. It’s my first blog; could the fastai community give me feedback on it, and on whether I’m getting the concepts right in relating superconvergence and regularization?
I just published SuperConvergence with inbuilt regularization -