Share your work here ✅

This is great! It turns out it isn't easy to teach a machine to tell whether a person is happy. I’ve been working on something similar: I want to classify images of people as sad, happy, or angry, and the best accuracy I’ve achieved so far is only about 68% :sob:

Butterfly classification

Butterflies are very beautiful, and I’ve often wanted to know the names of butterflies I observe in the wild. In my home country of Malaysia there are over 1,000 identified species of butterfly, each with its own distinct features and coloration. Building a classifier to help with conservation and casual identification seemed like a worthy project to set myself.

To start with, I made a classifier that identifies a butterfly at family rank. This also seems like a good case for benchmarking the fine-grained classification capabilities of the resnet34 model, as the visual differences between butterfly families are extremely small.

These are the notable visual features that distinguish each of the six families:

I acquired ~200 example images of each family from Google Images, using the wonderful little JavaScript tool written by @melonkernal to exclude irrelevant images and collect the image URLs for my dataset.
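
In case it helps anyone reproducing this step, the collected URLs can be fetched with a short script. This is only a sketch assuming the tool saved one URL per line to a text file; the file and folder names are made up, and fastai’s own `download_images` does the same job:

```python
# Sketch: download the image URLs collected by the scraping tool.
# Assumes one URL per line in a text file such as "urls_nymphalidae.txt"
# (all names here are illustrative, not from the original post).
import hashlib
import urllib.request
from pathlib import Path

def unique_name(url: str) -> str:
    """Derive a stable, collision-resistant file name from a URL."""
    ext = Path(url.split("?")[0]).suffix or ".jpg"
    return hashlib.md5(url.encode()).hexdigest() + ext

def download_urls(urls_file: str, dest: str) -> None:
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    for url in Path(urls_file).read_text().splitlines():
        url = url.strip()
        if not url:
            continue
        try:
            urllib.request.urlretrieve(url, dest_dir / unique_name(url))
        except Exception as e:  # broken links are common in scraped URL lists
            print(f"skipped {url}: {e}")

# Example (hypothetical paths):
# download_urls("urls_nymphalidae.txt", "data/butterflies/nymphalidae")
```

Hashing the URL into the file name avoids collisions when different URLs share the same base name.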

After initial training this is the plot for the learning rate:

Here is training after unfreezing with the confusion matrix:

The accuracy I got for the final model was 83%, which seems pretty good for a fine-grained problem, but I think I overdid the number of epochs. The error rate bounces up and down, which I suspect is a sign of overfitting.


And here are the top losses:

The gist for my notebook can be found here

I deployed the app to Heroku by following the examples of @simonw and @nikhil_no_1.

What I would do differently:

  1. Improve the data quality. Many of the most confused images were dirty; I should look into more ways of cleaning the data to improve accuracy.
  2. Automate the building of my dataset. I’d like to be able to scale up to 1000 classes if I take it further and attempt species-rank classification.
  3. Fewer epochs. With transfer learning, we already have pre-built weights that work pretty well. I didn’t really improve the accuracy very much after running too many epochs.
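
On point 1, one cheap cleaning pass (my own suggestion, not from the post) is dropping exact duplicate files before training, since scraped image search results often contain the same picture several times:

```python
# Sketch: find exact duplicate image files by content hash.
# A cheap first cleaning pass for scraped datasets.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(folder: str) -> list[list[Path]]:
    groups = defaultdict(list)
    for path in sorted(Path(folder).rglob("*")):
        if path.is_file():
            groups[hashlib.sha256(path.read_bytes()).hexdigest()].append(path)
    # Keep only hashes that occur more than once
    return [paths for paths in groups.values() if len(paths) > 1]
```

Note this only catches byte-identical files; near-duplicates (resized or recompressed copies) would need perceptual hashing.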

I’d love to hear feedback!

It’s my first time getting into deep learning, but I feel like I’ve learnt a huge amount from just working on a single problem. Thanks @jeremy @rachel for making this course available for us!


I got most of the data using search engines and listing sites. I did also use VMMRdb (which I believe sourced its data from Craigslist), but since it’s US data I only extracted the data relevant to Australia.


I used resnet50 and created an image classifier that tells you what type of martial arts someone is potentially practicing. I used the following types:

brazilian jiu jitsu
muay thai
tae kwon do

I was expecting it to have problems with bjj/judo and boxing/muay thai. The results are quite stunning.

Example inputs:

Confusion matrix:

After running lr_find, I got the following results:

```
Total time: 04:48
epoch  train_loss  valid_loss  error_rate
1      0.146283    0.391842    0.142212    (00:23)
2      0.160035    0.454250    0.158014    (00:24)
3      0.182352    0.466424    0.142212    (00:24)
4      0.200035    0.458152    0.142212    (00:24)
5      0.219165    0.472904    0.148984    (00:24)
6      0.177640    0.443418    0.128668    (00:23)
7      0.146477    0.357820    0.108352    (00:24)
8      0.118247    0.382005    0.115124    (00:24)
9      0.090084    0.347515    0.108352    (00:23)
10     0.071671    0.363055    0.108352    (00:23)
11     0.057950    0.387287    0.124154    (00:23)
12     0.044172    0.372846    0.115124    (00:23)
```
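
Reading that log, the validation loss bottoms out around epoch 9 while the training loss keeps falling, which looks like the onset of overfitting. A tiny helper (illustrative, not part of the original post) to pick the best checkpoint from such a log:

```python
# Sketch: parse a fastai-style training log and report the epoch with the
# lowest validation loss (columns: epoch, train_loss, valid_loss, error_rate).
LOG = """\
1  0.146283  0.391842  0.142212
2  0.160035  0.454250  0.158014
3  0.182352  0.466424  0.142212
4  0.200035  0.458152  0.142212
5  0.219165  0.472904  0.148984
6  0.177640  0.443418  0.128668
7  0.146477  0.357820  0.108352
8  0.118247  0.382005  0.115124
9  0.090084  0.347515  0.108352
10 0.071671  0.363055  0.108352
11 0.057950  0.387287  0.124154
12 0.044172  0.372846  0.115124
"""

def best_epoch(log: str) -> tuple[int, float]:
    rows = [line.split() for line in log.strip().splitlines()]
    epoch, _, valid, _ = min(rows, key=lambda r: float(r[2]))
    return int(epoch), float(valid)

print(best_epoch(LOG))  # epoch 9, valid_loss 0.347515
```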

I agree; trying it in practice, when it doesn’t get the answer exactly right it often gets very close in a subjective sense.

I’ll look at rolling models up into series (e.g. for Mercedes) and see how the accuracy holds up. I didn’t know about the Image Relabeller; that’s fantastic!

I built an image classifier to identify the classical dances of India and got an accuracy of around 88%. I posted it as a blog here


Okay, nice. I did try multiple crops, as I mentioned in my post:


Hello everyone!
I previously worked on the alligator vs. crocodile app, on top of which @Lankinen wrote an amazing post and article. Even though it was working fine, I was not satisfied with the testing results, so I decided to make version 2 of the app (again focusing on the deep learning part; I am not a UI person). I collected around 4k images (2k crocodiles and 2k alligators) and trained my model to close to 97% accuracy, then deployed it on Heroku. You can find the app in action here. Interestingly, this classification problem is harder than I expected.

What is interesting is that I deployed the weights of both my models: V1 with the previous weights, and epoch-36 and epoch-72 checkpoints for the V2 model. I ran the model on all test images and saved the results in the database provided by Heroku itself. I save the images, and at the end of each day I reset the counts and fine-tune the model (someone asked exactly this question in today’s class). I am planning to do that weekly instead of daily. I have a Medium post write-up; it’s still a draft.
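
The count-and-reset bookkeeping described above could look roughly like this. The schema and names are my own guesses, sketched with sqlite for brevity (the real app uses Heroku’s database):

```python
# Sketch: record per-model prediction feedback and reset the counts at the
# end of each day, as the post describes. Table and column names are
# hypothetical, not taken from the actual app.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE feedback (
        model   TEXT,     -- e.g. 'v1', 'v2_epoch36', 'v2_epoch72'
        correct INTEGER   -- 1 if the user confirmed the prediction
    )
""")

def record(model: str, correct: bool) -> None:
    conn.execute("INSERT INTO feedback VALUES (?, ?)", (model, int(correct)))

def check_counts() -> dict[str, tuple[int, int]]:
    """Return {model: (n_correct, n_total)}, roughly what /check_counts shows."""
    rows = conn.execute(
        "SELECT model, SUM(correct), COUNT(*) FROM feedback GROUP BY model"
    )
    return {m: (c, n) for m, c, n in rows}

def reset_counts() -> None:
    """End-of-day reset, after the saved images are used for fine-tuning."""
    conn.execute("DELETE FROM feedback")

record("v2_epoch72", True)
record("v2_epoch72", False)
record("v1", True)
print(check_counts())  # e.g. {'v1': (1, 1), 'v2_epoch72': (1, 2)}
```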

You can check the current counts of the model by appending /check_counts to the URL.

This was a real learning experience for me: I deployed a Heroku app with a Heroku database in a Docker container and trained two different models. Learned a lot.



Superintelligence will arrive when AI can predict whether parathas are aloo, mooli, or something else without tearing them apart


Released some starter code for the Human Protein Atlas competition on Kaggle!


Lesson 2 doesn’t work on Google Colab; it throws a MemoryError. Try it on , it is free for users until 31-12-2018 (more info)


No, I just relied on the data augmentation in fastai. It’s really great how you’ve shared all your work; it’s so accessible. With the multiple crops, did you make sure that all crops of the same car ended up in the same split (all in train or all in validation)?

Presenting the Image Classification App college kids need: it classifies the sore on your mouth as either a cold or canker sore. :smiley:
I used Google Images and gid2s to scrape my data and got 80% accuracy, although the dataset is small (only 70 validation images) and I’m not sure how robust the model really is right now.
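
With only 70 validation images, it’s worth quantifying how uncertain that 80% figure is. A quick sketch (my addition, not from the post) of a 95% Wilson score interval:

```python
# Sketch: 95% Wilson score interval for an observed accuracy, to show how
# wide the uncertainty is with a small validation set.
from math import sqrt

def wilson_interval(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    # p: observed accuracy, n: number of validation samples
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

lo, hi = wilson_interval(0.80, 70)
print(f"{lo:.2f} - {hi:.2f}")  # roughly 0.69 - 0.88
```

So the true accuracy could plausibly be anywhere from about 69% to 88%.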


Deployed it on Zeit. You can check it out here:

I wrote a small Medium post about this here:


I didn’t do anything like that; I just fed them in as multiple images of the same category, and used a subset of those images as the validation set.
But I should have used the original rectangular images. (Anyway, fastai applies a center crop for that.)
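
For anyone curious what that center crop does: the idea is to take the largest centered square from the rectangular image before resizing. A minimal NumPy sketch (illustrative only; fastai handles this internally):

```python
# Sketch: take the largest centered square from a rectangular image,
# which is the core of a center crop. Resizing would follow.
import numpy as np

def center_crop_square(img: np.ndarray) -> np.ndarray:
    h, w = img.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return img[top:top + side, left:left + side]

rect = np.zeros((300, 500, 3), dtype=np.uint8)  # landscape-shaped image
print(center_crop_square(rect).shape)  # (300, 300, 3)
```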

Hello everyone,

I made a simple binary classifier to identify building architecture styles - Gothic or Renaissance.

Images were downloaded from Google. I ran the model on resnet34 and got an accuracy of 90.5%. The source is available here. Feedback and suggestions are welcome.


When loading the image data, try changing num_workers to zero. It slows things down, but it’s possible to train something.

Even if you change num_workers=0, learn.lr_find() still gets interrupted.

Hey everyone,

After being inspired by @suvash and @nikhil.ikhar, I decided to look into Arabic handwritten characters. I found a dataset for Arabic handwritten characters published last year, where the authors achieved a SoTA of 94.9%.

After a couple of hours of work with the fastai library, making some tweaks and fine-tuning, I was able to hit an accuracy score of 96.9% :muscle: :boom:

I’ve collated my work in a notebook and pushed it to my GitHub if you would like to take a look -


I work on time series data for system and business monitoring. The most common approach is to work directly on the time series data points, using things like moving averages, regression, and neural networks.

I have always wondered how well it would work to simply convert the time series data to images and use convolutional neural networks for image classification. The intuition is that “we know an anomaly when we see one” - so why not just do that?

TL;DR: using the time series images I generated, I have been able to consistently get around 96-97% accuracy on this task.

This is kind of amazing, because the time series I trained on come from different domains (like service API latency versus purchase volumes), and the general thought has been that we need to fine-tune for each domain.

Here are some examples.

Anomaly: this time series has a spike toward the very end. I generated the images with some buffer at the right; I may experiment with making this lag window narrower in the future.

Normal Time Series: This example is normal at the right edge, which is where we want to detect anomalies. The spike toward the left might have been an anomaly at that point in time, but we are not interested in that now.

I have roughly 100 anomaly images and 400 normal images.
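
For readers who want to try the same idea, here is a minimal sketch of rendering a series to a fixed-size image for a CNN. All parameters (image size, buffer width, file name) are illustrative, not the author’s actual settings:

```python
# Sketch: render a time series to a fixed-size image so a CNN can classify
# it. The buffer at the right edge mimics the setup described in the post.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

def series_to_image(values, out_path, right_buffer=10, size_px=224):
    fig = plt.figure(figsize=(size_px / 100, size_px / 100), dpi=100)
    ax = fig.add_subplot(111)
    ax.plot(values, color="black", linewidth=1)
    ax.set_xlim(0, len(values) + right_buffer)  # buffer at the right edge
    ax.axis("off")  # the CNN should see only the curve, not axes or ticks
    fig.savefig(out_path, bbox_inches="tight", pad_inches=0)
    plt.close(fig)

rng = np.random.default_rng(0)
series = rng.normal(size=200)
series[-5] = 8.0  # spike near the end, i.e. the "anomaly" class
series_to_image(series, "anomaly_example.png")
```

One caveat worth noting: with 100 anomaly and 400 normal images, always predicting “normal” already scores 80%, so 96-97% accuracy is well clear of the majority-class baseline.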

Notebook is shared here.

Training results:


Hi everyone!

I managed to make an immune cell classifier and I’m serving it at FloydHub thanks to @whatrocks. I got my data from Paul Mooney, who made it available on Kaggle. The model has an accuracy of about 0.95, but I still think I can make it better. Check it out here: [Immune cell classifier](). In case I’ve put it offline, here’s the screenshot.

Sooo excited that the library has enabled me to achieve this so quickly. It’s been a dream of mine to make this for months. Here’s the repository with the code and the model.