Share your V2 projects here

Since it’s almost Halloween, I made a Harry Potter character recognizer app. Here’s the repo. It should be able to match
people = ['Harry Potter', 'Hermione Granger', 'Ron Weasley',
          'Dobby the House Elf', 'Minerva McGonagall', 'Ginny Weasley',
          'Dudley Dursley', 'Lord Voldemort', 'Luna Lovegood',
          'Bellatrix Lestrange', 'Remus Lupin']
with an error rate under 15%. I wanted to try something with a lot of categories. I could test with more data, but it’s already more accurate than I thought it would be. Bing’s image search is fun.


So yeah, I created my first model using fastai. It’s pretty simple and straightforward: you upload an image of the interior or exterior of a car (it only works for BMW/Mercedes/Audi), and it tells you which brand it thinks it is and whether it’s the interior or exterior. You can try it out here:

Just click on the Binder button (it takes some time to launch, so some patience is needed).
That’s pretty much it. I’ll also be writing about it in my first blog post here (not written yet):
Here are some pictures of it:
[Screenshots: car_classifier_in_action_exterior, car_classifier_in_action_interior]

My first blog post using fastpages.
Performance comparison between fastai ULMFiT and a basic PyTorch model on StockTwits sentiment classification. Results are close, but fastai ULMFiT wins by about 3%.


Marcos, Congrats! When I went to access the notebooks at the link near the end of the paper (Marcosuff/projects), I got a 404. Is there another way to access the notebooks?

This is really cool! Do you have a notebook you would be willing to share?

We recently managed to deploy a fastai-trained PyTorch model in TorchServe. In case anyone runs into a similar situation and is looking for a recipe:
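A minimal sketch of the packaging step, in case it helps get people oriented (the file and model names here are hypothetical; adapt them to your own exported model and handler):

```shell
# Hypothetical file names: model.py must contain the single model class
# (e.g. the DynamicUnetDIY definition) and fastai_unet.pth its exported
# state_dict. The handler implements pre/post-processing.
torch-model-archiver \
  --model-name fastai_unet \
  --version 1.0 \
  --model-file model.py \
  --serialized-file fastai_unet.pth \
  --handler handler.py \
  --export-path model_store

# Then serve the resulting .mar archive:
torchserve --start --ncs --model-store model_store --models fastai_unet.mar
```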

Let me know if you found it useful.


I’ve added TensorBoard Projector support (currently word embeddings and image embeddings) to fastai’s TensorBoard integration. Here’s a notebook that shows how to use the TensorBoardProjectorCallback to visualize image embeddings in the TensorBoard Projector. You can run the notebook in Colab.


If you have any questions let me know :slight_smile: .



Did you try enabling the attention flag on the U-Net? It greatly helps performance, but I was unable to use it at inference time: “illegal memory access” errors everywhere.

Do you mind sharing your code snippet and the full error trace message?

I run the following with no issue:

FastAI Training

config = unet_config(self_attention=True, act_cls=Mish)
learn = unet_learner(dls, resnet50, metrics=acc_camvid, config=config)

PyTorch Model Definition

class DynamicUnetDIY(SequentialEx):
    "Create a U-Net from a given architecture."

    def __init__(
        self,
        img_size=(96, 128),
        self_attention=True,  # Here
        act_cls=Mish,  # Here
        ...


>>> time http POST @sample/street_view_of_a_small_neighborhood.png

HTTP/1.1 200
Cache-Control: no-cache; no-store, must-revalidate, private
Expires: Thu, 01 Jan 1970 00:00:00 UTC
Pragma: no-cache
connection: keep-alive
content-length: 16413
x-request-id: 35600873-8657-4998-b822-26340bf2bd1a

  "base64_prediction": "GhoaGhoaGhoaGh...RERERERERERERHh4eHh4eHh4eHh4e"

real    0m0.386s
user    0m0.309s
sys     0m0.030s
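For anyone consuming that response: the `base64_prediction` field is just base64-encoded bytes of per-pixel class indices (the leading `Gho…` run above decodes to the value 26 repeated). A minimal sketch of decoding it on the client side, using a made-up tiny response for illustration:

```python
import base64

# Made-up response standing in for the JSON body above; a real mask is
# H*W bytes of per-pixel class indices.
response = {"base64_prediction": base64.b64encode(bytes([26, 26, 17, 17])).decode()}

mask_bytes = base64.b64decode(response["base64_prediction"])
mask = list(mask_bytes)  # flat list of class indices; reshape to (H, W), e.g. with numpy
print(mask)  # [26, 26, 17, 17]
```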

Hey, thanks for your answer and the awesome tutorial.
My issue appears to be solved by setting:

torch.backends.cudnn.benchmark = False

I have another question: why do you reconstruct the U-Net in your inference code instead of just using torch.load? In my fastai code I do:, '')

and then in plain PyTorch just do:

model = torch.load('')

With this, the U-Net comes along inside the file.


Thanks! Great tips that will be very helpful to anyone who runs into the same issue.

As to the reconstruction, the reasons are:

  1. For TorchServe eager mode, it requires an explicit model definition:
--model-file MODEL_FILE
Path to python file containing model architecture. This parameter is mandatory for eager mode models. The model architecture file must contain only one class definition extended from torch.nn.modules.
  2. According to the answer below:

This way is still not bullet proof and since pytorch is still undergoing a lot of changes, I wouldn’t recommend it.
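For what it’s worth, the pattern that quoted answer recommends instead of pickling the whole model object is to save only the state_dict and load it into a freshly constructed architecture. A toy module stands in for the U-Net here, but the save/load pattern is the same:

```python
import torch
from torch import nn

# Toy stand-in for DynamicUnetDIY; the pattern is identical for any nn.Module.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "weights.pth")  # weights only, no class pickled

rebuilt = nn.Linear(4, 2)                      # reconstruct the architecture in code
rebuilt.load_state_dict(torch.load("weights.pth"))
rebuilt.eval()

x = torch.zeros(1, 4)
assert torch.equal(model(x), rebuilt(x))       # same outputs as the original
```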

Good to know, though I do think that answer is pretty old.

  1. I was able to run TorchServe without passing the model definition, only passing the saved model as I showed you above. As you say, it’s probably not the right way, so I converted the U-Net to TorchScript.
  2. Do you have any tricks for making inference work in FP16?

I am curious how the autonomous-driving people do segmentation at 30 FPS on FHD images. These U-Net-style models are heavy as hell; the forward pass (eval mode) of the resnet34 U-Net on a single FHD image takes 15 GB of VRAM!

Hi Raphael,

I enjoyed reading your article, and the code works great with your dataset. Unfortunately, I’m having a problem at inference time (under Windows) when using my own dataset (jpg images): the model gives wrong/different predictions on Windows compared to the same image on Colab.
On Colab, I got an accuracy of ~90%.
Maybe some parameters need to be changed, perhaps something below:
var imageInfo = new SKImageInfo( …

Thanks in advance

Hi all,

I’d like to share with those of you interested in time series tasks that I’ve just finished a major update to the tsai (timeseriesAI) library, adding lots of new functionality, models and tutorial notebooks. It now works well with fastai v2 and PyTorch 1.7.

If you are interested you can find more details in this blog post or in the repo.


Hi all,

This is my first fastai app. It’s called Black Bird Detector and was trained to distinguish between blackbirds, ravens and crows. I used 100 images of each bird species to train a transfer-learning model with a resnet18. After adding image augmentation, the accuracy is roughly 88%.

The app is rendered with Voila and runs in a Binder backend. You can try it out here: black_bird_detector




I am using fastai in this training software to drive this hardware.

Captured images predict throttle and steering to navigate my basement track. I do the training on an NVIDIA Jetson Xavier. Training data is collected by driving the car with a Bluetooth Xbox game controller. An onboard NVIDIA Jetson Nano does the inference when the car is driving itself.


Hey guys,
I’ve done an in-depth tutorial on image colorization using a U-Net and a conditional GAN, and published the project on Towards Data Science.
You don’t need fancy GPUs or huge datasets to train this model. I’ve developed a strategy that lets you train the whole model in less than three hours on only 8,000 images and still get great results!

Here’s the link:
I’ve also uploaded the project on my GitHub repo:

You can directly open it on Colab with this:

I hope you enjoy it.


I’m still reading the blog post but wow, this looks really great! Thank you for sharing!


Thank you! Happy to hear that

Hi Moein, hope you’re having a wonderful day.
Wonderful work: well written, clear, precise and enjoyable!
Cheers mrfabulous1 :smiley: :smiley: