Share your work here ✅

Hi everyone,

I have created a binary image classifier to identify normal and abnormal brain MRI scans, and deployed it.

You can test it here:

Thank you so much for all your help and support.


I used the material in the first two lectures to train a classifier for detecting plant diseases through the sounds insects make (still a hypothesis). I have been working on this project with researchers in the U.K. for a while now and was having problems analysing raw audio, but I was able to solve them by converting the recordings to images with librosa.
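For anyone hitting the same raw-audio wall, the conversion step is essentially this (a minimal sketch; the function name and parameters are illustrative, not my exact settings):

```python
import numpy as np
import matplotlib.pyplot as plt
import librosa
import librosa.display

def audio_to_spectrogram(wav_path, png_path):
    """Turn a raw recording into a mel-spectrogram image a CNN can train on."""
    y, sr = librosa.load(wav_path, sr=None)            # keep the native sample rate
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    S_db = librosa.power_to_db(S, ref=np.max)          # log scale, standard for audio
    fig = plt.figure(figsize=(4, 4))
    librosa.display.specshow(S_db, sr=sr)
    plt.axis('off')                                    # no axes, just the "picture"
    fig.savefig(png_path, bbox_inches='tight', pad_inches=0)
    plt.close(fig)
```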

I trained a classifier for the different experiment conditions (e.g. infected plant or non-infected plant, males only or males and females together):


And then compared the usefulness of different dimensionality-reduction methods (PCA, t-SNE, UMAP) for visualising the learned features. For example, PCA here:
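The comparison itself is only a few lines; here is a rough sketch, assuming the features have already been extracted from the trained model into an array (file paths below are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap  # from the umap-learn package

# feats: (n_samples, n_features) embeddings from the trained classifier;
# labels: numeric code for the experiment condition of each sample
feats = np.load('features.npy')
labels = np.load('labels.npy')

embeddings = {
    'PCA':   PCA(n_components=2).fit_transform(feats),
    't-SNE': TSNE(n_components=2).fit_transform(feats),
    'UMAP':  umap.UMAP(n_components=2).fit_transform(feats),
}

# one scatter plot per method, coloured by condition
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, (name, emb) in zip(axes, embeddings.items()):
    ax.scatter(emb[:, 0], emb[:, 1], c=labels, s=5)
    ax.set_title(name)
plt.show()
```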


These results look encouraging - is your validation set from plants that are well separated from those in the training set?

Hi Everyone,
I made a birds classifier based on what I learned in lectures 1 and 2, so here it is.
Here I am using a pre-trained ResNet34 model.
The accuracy is 91%.
The confusion matrix:

Notebook URL:
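For anyone who wants to reproduce it, the notebook boils down to the standard fastai v1 recipe (the folder name and epoch count below are placeholders, not necessarily my exact settings):

```python
from fastai.vision import *

# 'birds/' is a placeholder folder with one sub-folder of images per class
data = ImageDataBunch.from_folder('birds/', train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224
                                  ).normalize(imagenet_stats)

learn = cnn_learner(data, models.resnet34, metrics=accuracy)  # ImageNet weights
learn.fit_one_cycle(4)

# the confusion matrix shown above
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
```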


I successfully trained a text classifier on legal judgments based on the lesson3-imdb notebook. A multi-label classifier would have been more suitable for the use case I had in mind, but I went ahead and trained a 19-way classifier and got very strong results out of the box (82.56% accuracy), which is encouraging given the huge imbalance in the number of documents per topic, the number of classes (19 instead of just positive/negative), and the fact that I used mostly fastai default settings. The errors the classifier did make were reasonable misclassifications due to overlapping subject matter.
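For anyone wanting to try the same thing, the flow follows lesson3-imdb: fine-tune the language model on your corpus, then transfer its encoder into the classifier. A rough fastai v1 sketch (file paths, column names, and hyper-parameters are illustrative):

```python
from fastai.text import *
import pandas as pd

# placeholder files; each has a 'text' column (judgment text) and a
# 'label' column (one of the 19 topics)
train_df = pd.read_csv('train.csv')
valid_df = pd.read_csv('valid.csv')

# 1) fine-tune the language model on the judgments
data_lm = TextLMDataBunch.from_df('.', train_df, valid_df, text_cols='text')
lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm.fit_one_cycle(1, 1e-2)
lm.save_encoder('ft_enc')

# 2) transfer the fine-tuned encoder into a 19-way classifier
data_clas = TextClasDataBunch.from_df('.', train_df, valid_df,
                                      text_cols='text', label_cols='label',
                                      vocab=data_lm.vocab)
clf = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clf.load_encoder('ft_enc')
clf.fit_one_cycle(1, 1e-2)
```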

What’s so amazing is that the fastai library makes it so easy to get quick results. There were frequent changes to the library in the past week as I was working on this but the actual training of the model was pretty straightforward once the library updates settled, with only a small amount of digging into the source code required to understand what was going on.

I will be presenting my results and how I used fastai’s ULMFiT this Wednesday at the National University of Singapore’s School of Computing Project Showcase (I’ve been participating in a deep learning study group there). Here’s the poster I’ve prepared for it. Will also put up a more detailed Medium post after I run a few more experiments.

I look forward to seeing more of everyone’s impressive work!

Confusion matrix

Sample judgment page


Hi All, I worked on another problem from the cancer genomics domain – cancer type classification using gene expression data – this is my subject matter, hence almost the same topic for all of my work :wink: This time I peeked a bit at the structured data documentation and did not convert the data into images (although I am sure we could represent it that way). Overall accuracy is 93.9%, a tiny bit better than the recent paper that addressed this problem.
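If you want to try this on your own expression matrix, the structured-data path boils down to something like the sketch below (file path, column names, validation split, and layer sizes are all made up):

```python
from fastai.tabular import *
import pandas as pd

# hypothetical layout: one row per sample, all gene-expression columns
# are continuous, and 'cancer_type' is the label
df = pd.read_csv('expression.csv')
dep_var = 'cancer_type'
cont_names = [c for c in df.columns if c != dep_var]

data = TabularDataBunch.from_df('.', df, dep_var,
                                valid_idx=range(len(df) - 200, len(df)),
                                cont_names=cont_names, procs=[Normalize])

learn = tabular_learner(data, layers=[200, 100], metrics=accuracy)
learn.fit_one_cycle(5)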



Yeah, got it. Thanks! :slight_smile:

Hi everyone.
I made a wolf detector app that can tell whether a wolf-like animal is actually a wolf or not.


I used 5k images of different wolves and 5k non-wolf images.
The non-wolf images contain a uniform distribution of all non-extinct dog breeds plus some wildlife such as bears, deer, foxes, and raccoons, to help eliminate background bias.


Are the images 4-channel?

Hi everyone.

Here is my toy app to detect what you are drinking:

If you use your smartphone, you can take a picture and test it on the fly.

Achieved around 90% accuracy with ResNet50 on about 1.5k Google images.

Here is my GitHub repository with scripts to make it work on your own Linux server.


Dragonborns vs Tieflings

My son’s favorite activity since he started high school a few months ago is his afterschool board game club, and his favorite game is a Dungeons and Dragons campaign they play at the club. I sometimes get to hear of funny and heroic acts or complex strategies when he returns home late at night. Although the storytelling is engaging, I don’t understand much of the subtle character details, their personalities, and abilities. It turns out, until two weeks ago I couldn’t even tell an Orc from a Goblin most of the time (Orcs are bigger, and Goblins are sometimes green…) or a Dragonborn from a Tiefling; I didn’t even know what a Tiefling was.

The weekend after the second fastai lecture, I became good at telling Tieflings apart from other races. Dragonborns, at advanced levels, can get wings and start to look a lot like Dragons, but they are much smaller, they stand on two legs, they wear human-like clothes, and they use weapons in a human way; Dragons, on the other hand, are huge and very wise; think of the dragon in The Hobbit. To make things more complicated, Halflings still really look like Humans to me, but they tend to be merry and small, though reasonably easy to tell apart from Dwarfs. Since all of these creatures were invented by creative minds, their exact characteristics vary a lot more in the wild than the characteristics of well-defined cat or dog breeds, so the weekend project became an adventure involving data collection, hacking minor tools (sha1sum, FileDeleter mods), and help from my expert son. Here is a neural net that is better than I am at telling the D&D character race from an image:

My vacation pictures suggest I’m a Halfling, but my son managed to change his expression enough to pass as a dragon!


Lots of the feedback I got on the satellite app (the one that recognizes the city from a satellite image) revolved around the same question: “what is it looking at in the image of my neighborhood that makes it look like it is from my country?”

I tried to implement a couple of interpretation methods I found linked on the forums (see further discussion in this thread).

The results are a little mixed, but I went ahead and added the feature to the app; let me know what you think:

Example output - you can see here that it avoids the parks and is very interested in the shape of the blocks:


I tried a similar approach with the dataset from the Freesound General-Purpose Audio Tagging Challenge, using the data preparation script from AudioNet, but my results were not good across all 41 classes (17% accuracy). Could you please share your notebook?


I used the data preparation script from AudioNet. How did you generate spectrogram images from audio files?


Thanks! I’ll look into it.

I haven’t found anything concrete yet. For OCR I found the “Khatt Database” - I contacted the authors to get a download link, but they haven’t replied yet.

The other data sets I found are not OCR - mostly Wikipedia articles, tweets, and text-messaging dumps for sentiment analysis and other NLP applications. Most of them seem to come from Stanford and Concordia University.

I want to dig deeper and decide on a project to work on; it’d be awesome to work together.



Managed to produce Grad-CAM activation maps (with kind support from @henripal)! Unfortunately, I’m having problems incorporating them into the Flask app at the moment, so the app is not updated yet.

Anyway, this is the interesting result for a double classification challenge (the model goes for the right guitar in its prediction)…

Seems like:

  • a Telecaster is associated with bright fingerboards
  • a Stratocaster’s features are the body horn and the tone knobs
  • a Gibson Les Paul is identified by its body contour
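For anyone who wants to try it, the core of Grad-CAM is a forward/backward hook pair on the last convolutional layer. A rough PyTorch sketch (not my exact notebook code; pick target_layer to suit your model):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, target_layer):
    """Grad-CAM heatmap for one input tensor x of shape (1, 3, H, W)."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        out = model(x)                       # forward pass records activations
        model.zero_grad()
        out[0, target_class].backward()      # backward pass records gradients
    finally:
        h1.remove(); h2.remove()
    a, g = acts[0], grads[0]                 # both (1, C, h, w)
    w = g.mean(dim=(2, 3), keepdim=True)     # channel weights = pooled gradients
    cam = F.relu((w * a).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode='bilinear',
                        align_corners=False)
    return cam.squeeze().detach()            # (H, W) map to overlay on the input
```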


The notebook is here:


I have been successfully converting 2D LIDAR data into images. I am using a rotating LIDAR inside of pipes to look for defects in the pipe. The LIDAR itself provides one measurement every 1/4 of a degree across 270 degrees, so each “row” is 1080 points of data, and each point represents the distance at that angle.

A single scan of the data looks like this:

I am stacking 200 scans on top of each other and normalizing, which gives me data that looks like this:


I am doing some more math on top to get better results from my models: a vertical diff, row by row.
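In code, the whole preprocessing step comes down to a few NumPy lines (variable names here are mine, just for illustration):

```python
import numpy as np

# scan_buffer: list of 1-D arrays, one distance reading per 1/4 degree,
# i.e. 1080 values per scan over the 270-degree sweep
scan_buffer = [np.loadtxt('scan_%04d.txt' % i) for i in range(200)]  # placeholder

img = np.stack(scan_buffer[-200:])                  # (200, 1080) "image"
img = (img - img.min()) / (np.ptp(img) + 1e-8)      # normalize to [0, 1]

vdiff = np.diff(img, axis=0)                        # vertical diff, row by row
```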

I’m not a huge fan of doing the extra math on top of the raw data before sending it into my models for training, but I have been getting better results on basic classification.

If anyone wants more information or just wants to chat about LIDAR, please feel free to ping me.


This is awesome, thank you for sharing!

Yes! The validation set was from a separate set of recordings conducted on a different date with the same experiment. The plants were kept in sealed chambers too, so there is very little likelihood that the ones labeled as infected are actually non-infected.

I’ll do a write-up as soon as possible and share it. It would be interesting to get some feedback.

Hello, guys. I’d like to share my simple baseline for the Human Protein Atlas competition on Kaggle.
