Share your work here ✅

Nice! In Part 2 of the course, most of the fastai library is re-implemented in S4TF. Luckily, you won’t have to worry about learning Swift! :wink:


I trained a classifier to discriminate between Chanterelle mushrooms and Jack-o-Lantern mushrooms to 85% accuracy.

Chanterelles are delicious. Jack-o-Lanterns are poisonous!

Great course, thank you for all of it :slight_smile:


@quan.tran @joshfp FYI the article just got featured in Towards Data Science:


Using Google Images, I created a dataset of handguns, namely Glock, revolver, and Desert Eagle. I trained a classifier and it got 96% accuracy!
Here is the link:
Handgun classification

Excellent job! I have been thinking of doing a regression example with a dataset I could understand easily, and this is a perfect example.

Many Thanks mrfabulous1 :smiley::smiley:

I’ve combined Lesson 3’s CamVid image segmentation and the Planet image classification lesson to create an image segmentation model that detects building footprints in satellite images.

There is a lot of training data (tens of GB of high-resolution images) from the SpaceNet competition, but mostly I wanted to do one project on my own, so I took ~6k image chips of Rio de Janeiro.

The hardest part was converting the .geojson footprints into images (I used parallel rio rasterize), and then fiddling to avoid running out of memory. Overall I’m quite happy. Here is the notebook, and the results:

Left: ground truth; right: prediction.


I tried this code and it works for training with .fit_one_cycle(). However, when I run the learner with .get_preds(), the callback is not run for some reason. Is there a way to make it run at prediction time, and not only during training?

Update: I am trying to collect predictions from a pre-trained model, so I want to avoid changing the weights, which is why I am using .get_preds(). Would using .fit_one_cycle() with a learning rate of zero effectively achieve the same result?

.get_preds() expects an argument of what you want to get predictions for. Are you passing it a data object?

Yes, I am passing data.train_ds to .get_preds(). The problem is that the callbacks are not working as I would expect with .get_preds(), as opposed to working well with .fit_one_cycle(). For now, what I am doing is setting the learning rate to zero and using .fit_one_cycle() to record activations without training the network.
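The lr=0 workaround avoids weight updates, but the underlying need (capturing activations at prediction time) maps naturally onto forward hooks, which fire on every forward pass regardless of whether you are training. Here is a library-free sketch of that pattern; the `Layer` and `ActivationRecorder` classes below are toy stand-ins I made up for illustration (in PyTorch the equivalent mechanism is `module.register_forward_hook`, which fastai's `hook_output` wraps):

```python
class Layer:
    """Toy layer supporting forward hooks, loosely mimicking nn.Module."""
    def __init__(self, fn):
        self.fn = fn
        self.hooks = []

    def register_forward_hook(self, hook):
        self.hooks.append(hook)

    def __call__(self, x):
        out = self.fn(x)
        for hook in self.hooks:
            hook(self, x, out)   # hooks see the layer, its input, and its output
        return out

class ActivationRecorder:
    """Stores every output a hooked layer produces."""
    def __init__(self):
        self.activations = []

    def __call__(self, layer, inp, out):
        self.activations.append(out)

# Hook the layer, then run "predictions": the hook fires on every call,
# with no optimizer or training loop involved.
layer = Layer(lambda x: x * 2)
recorder = ActivationRecorder()
layer.register_forward_hook(recorder)

preds = [layer(x) for x in [1, 2, 3]]
print(preds)                 # [2, 4, 6]
print(recorder.activations)  # [2, 4, 6]
```

Because the hook lives on the module rather than the training loop, it runs during plain inference too, which is exactly what the callback-based approach was missing.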

Hello everyone

I recently wrote a Medium article on integrating fastai with BERT (Hugging Face’s pretrained PyTorch models for NLP) on a multi-label text classification task. After that, I compared the performance of BERT and ULMFiT.

Here are a few things I did to integrate fastai with BERT:

  1. Using BERT’s tokens and vocab
  2. Making some modifications to BERT’s tokens for the eos and bos markers
  3. Splitting the model for discriminative learning techniques
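Step 2 can be sketched in a few lines: BERT expects `[CLS]` at the start of a sequence (its "bos") and `[SEP]` at the end (its "eos"), in place of fastai's default `xxbos`/`xxeos` tokens. This is a minimal, library-free illustration; `basic_tokenize` is a hypothetical stand-in for the real `BertTokenizer.tokenize` from the transformers library:

```python
def basic_tokenize(text):
    # Stand-in for WordPiece tokenization -- just lowercase + split here.
    return text.lower().split()

def bert_tokenize(text, max_len=128):
    """Tokenize and wrap with BERT's special tokens.

    The inner tokens are truncated so that the total length, including
    the [CLS]/[SEP] pair, never exceeds max_len.
    """
    tokens = basic_tokenize(text)[: max_len - 2]
    return ["[CLS]"] + tokens + ["[SEP]"]

print(bert_tokenize("Great movie with a terrible ending"))
# ['[CLS]', 'great', 'movie', 'with', 'a', 'terrible', 'ending', '[SEP]']
```

In the actual integration this logic would live in a fastai tokenizer wrapper, so the data block pipeline produces sequences in the shape BERT was pretrained on.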

Here is the link:

I was amazed by the level of accuracy using just 2 epochs:

  1. BERT - 98% accuracy
  2. ULMFiT - 97% accuracy

I would be glad to hear any feedback or comments on this.



Copy of the post from Share your work here (Part 2) :

Hi there!
Check out my recent blog post explaining the details of One-Cycle-Policy (fastai’s famous method .fit_one_cycle() ): The 1-Cycle-Policy: an experiment that vanished the struggle in training of Neural Nets.
I’ve tried to keep everything as simple as possible, especially when explaining the code. Hope you will enjoy it :slightly_smiling_face:
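The shape of the schedule behind `.fit_one_cycle()` is easy to compute by hand: ramp the learning rate from `lr_max/div` up to `lr_max` for the first part of training, then anneal back down. This is a simplified sketch using linear interpolation, as in Leslie Smith's original paper (fastai's implementation interpolates each phase with a cosine, but the overall shape is the same); the parameter names mirror fastai's but the function itself is my own illustration:

```python
def one_cycle_lr(step, total_steps, lr_max=0.01, div=25.0, pct_start=0.3):
    """Learning rate at `step` (0-based) out of `total_steps`."""
    lr_min = lr_max / div
    warmup_steps = int(total_steps * pct_start)
    if step < warmup_steps:
        # Phase 1: linear warmup lr_min -> lr_max.
        frac = step / max(warmup_steps, 1)
        return lr_min + (lr_max - lr_min) * frac
    # Phase 2: linear annealing lr_max -> lr_min.
    frac = (step - warmup_steps) / max(total_steps - warmup_steps - 1, 1)
    return lr_max - (lr_max - lr_min) * frac

schedule = [one_cycle_lr(s, 10) for s in range(10)]
print(max(schedule))  # 0.01 -- lr_max, reached at the end of warmup
```

The brief warmup lets training survive the large `lr_max`, and the long annealing tail is what lets the network settle into a good minimum with so few epochs.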

Very cool! Well done! A great test would be to see how it works with hurling :smiley:

Hi everyone. Using the concepts from Lesson 2, I created the emotion classifier that Jeremy talks about in the video.
Just by changing the weight decay, I was able to get around 66% accuracy; the top results Jeremy mentions in the video are around 65.7%.
As I am using Google Colab, I am not able to clean the dataset using the data-cleaning widget (a third-party app), but I still got pretty good accuracy.

Are there any other ways to clean the dataset that work with Google Colab?


What Programming Language Is It?

Programming Language Classifier

After watching lessons 1-4, I decided to make a web app that classifies text by programming language. I searched the web but couldn’t find much research on this topic. There was one example that uses a Multinomial Naive Bayes (MNB) classifier to achieve 75% accuracy, which is higher than Programming Languages Identification (PLI, a proprietary online classifier of snippets), whose accuracy is only 55.5%.

I was able to reach about 81% accuracy (according to fastai), although I’m not sure I’m measuring it the same way as the paper, after following the same basic steps as the IMDB example. This was done with the dataset I found from the author of the paper, here: I noticed that the dataset is pretty messy, and a lot of the CSS/HTML/JavaScript is misclassified as another language in that group. That is apparent in the confusion matrix:

But regardless, I made a web app, which is currently here: It lets you paste in a snippet of code, and it will tell you what language it thinks it is. Give it a try and see if you can confuse it! I created the web app based on the template here:

I did find another, much bigger dataset of classified code on Kaggle (called “lots of code”), which hopefully has fewer misclassifications, and I am going to try using it to improve the accuracy even further.

I am having way too much fun with this course. Thanks everyone!


Please highlight your specific reply so I can go back and follow that thread. Thank you.

Excited to share work predicting 12-year mortality from chest X-rays. Deep learning can extract prognostic information about health and longevity embedded in routine medical imaging. Made using fastai.

JAMA Open manuscript
Editorial podcast


Nice project! Can you please share your notebook / kaggle kernels / github repository for this work?

@NathanHub and I recently participated in the Freesound Audio Tagging 2019 Kaggle competition and received our first bronze medal (95th of 880).

We used fastai and XResNet in our model. I’ve written up a more complete description here:

Our repository with notebooks:


Fastai Active Learning

tl;dr: First attempt at active learning for the fastai library. Includes random, softmax, and Monte Carlo Dropout standard-deviation calculations to measure how uncertain a machine learning model is about an example.

I recently came across a paper called BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning, and the following blog post, which was a segue into exploring active learning.

I thought it would be cool to have active learning as part of the fastai library. I am not an expert on active learning, but I thought it would be a great way to learn about the field and its different algorithms. Part 2 of the fastai lessons has been very helpful in implementing the gist.

The implementation separates the uncertainty measurement from the active learning selection process. There are two options for selection:

  1. Select the x most uncertain examples from the entire dataset
  2. Select the x most uncertain examples from each batch, across all batches in the dataset.
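The uncertainty-scoring half of this split can be sketched in plain Python: score each example as the standard deviation of its predicted probability across several stochastic (MC Dropout) forward passes, then pick the x highest-scoring examples (option 1 above). The `mc_passes` dict below is fabricated toy data standing in for real model outputs, and `most_uncertain` is my own illustrative helper, not the gist's actual API:

```python
from statistics import pstdev

def most_uncertain(mc_passes, x):
    """mc_passes: {example_id: [prob from each dropout pass]}.
    Returns the x example ids whose predictions vary most across passes."""
    scores = {ex: pstdev(probs) for ex, probs in mc_passes.items()}
    return sorted(scores, key=scores.get, reverse=True)[:x]

mc_passes = {
    "img_a": [0.91, 0.90, 0.92],  # confident: low spread
    "img_b": [0.20, 0.75, 0.55],  # uncertain: high spread
    "img_c": [0.50, 0.52, 0.49],
}
print(most_uncertain(mc_passes, 2))  # ['img_b', 'img_c']
```

Keeping the scoring function separate from the selection strategy means the same std-based (or softmax-based, or random) scorer can feed either the whole-dataset or the per-batch selection option.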

I hope to continue working on the gist and fully implement different papers. If anyone wants to contribute or finds bugs, feel free to PM me.

Shout out to @mrdbourke for implementing Monte Carlo Dropout in fastai.


Hi everyone,

I’m happy to present v1 of a comprehensive intro tutorial to geospatial deep learning (focused on building segmentation from drone imagery in Zanzibar) using fastai v1, the latest cloud-native geodata processing tools, and running fully self-contained on Google Colab for ease of learning (and free GPUs!):


Conceptual overview:

Colab notebook (previewed in nbviewer):

More info & highlights over on the geospatial deep learning thread: Geospatial Deep Learning resources & study group

Given that there’s a lot covered here, I’m sure I missed many things (bugs, mis-assumed knowledge, janky code, bad links). I appreciate any and all feedback to make the next versions of this tutorial even better, so thank you in advance!