Share your V2 projects here

A category to share the different projects that we have completed or plan to work on.

This thread takes inspiration from Share your work here ✅.

Here is the summary mentioned again:

Show us what you’ve created with what you learned in V2! :slight_smile: It could be a blog post, a Jupyter notebook, a picture, a GitHub repo, a web app, or anything else. Some tips:

  • Probably the easiest way to blog is using fastpages or Medium.

  • The easiest way to share a notebook on GitHub is to install the gist-it extension. This will only be possible if you use a platform that supports Jupyter extensions, such as GCP. Otherwise, you can create a notebook gist by clicking File -> Download to get your notebook onto your computer, and then follow the steps from this SO post:

    1. Go to
    2. Click ‘New Gist’ in the upper-right corner
    3. Open the folder in a Finder/Explorer window on your local computer
    4. Drag the file into the text box (the ‘code space’). This should fill the space with JSON-looking text for the framework of the notebook content.
    5. Copy/Paste the full file name (e.g., mynotebook.ipynb) into the filename box, and give a description above.
    6. Create the Gist!
  • If you want to have folks on the forum look at a draft and give feedback without sharing it more widely, just mention that in your post.

  • You can also just use a reply to this topic to describe what you did - preferably pasting in a picture or two!
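The manual gist steps above can also be scripted against GitHub’s REST API (`POST https://api.github.com/gists`). Here is a minimal sketch that builds the JSON body the API expects; `build_gist_payload` is a name I made up for illustration, and the token in the comment is a placeholder you’d supply yourself:

```python
import json
import os

def build_gist_payload(notebook_path, description):
    """Build the JSON body GitHub's gist API expects for one notebook file."""
    with open(notebook_path) as f:
        content = f.read()
    name = os.path.basename(notebook_path)  # gists key files by name, not path
    return json.dumps({
        "description": description,
        "public": True,
        "files": {name: {"content": content}},
    })

# To actually create the gist, POST the payload with an auth header, e.g.:
#
# import urllib.request
# req = urllib.request.Request(
#     "https://api.github.com/gists",
#     data=build_gist_payload("mynotebook.ipynb", "fastai v2 demo").encode(),
#     headers={"Authorization": "token <YOUR_TOKEN>"},
# )
# urllib.request.urlopen(req)
```

This is handy when you want to push many notebooks without dragging each one into the browser.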


@init_27 and I are going to dig deep into the source code and try different combinations of schedulers/optimisers/architectures this week :slight_smile:

I have also decided to work on a separate project: classifying my friend’s pet using footage from the security camera installed outside his house.


I am trying to convert one of my old fastai-based solutions for image segmentation to fastai v2.


As I consider my Walk with fastai2 study group my project, I have a few lined up:

  1. Implement DeepGBM into the framework
  2. Bring Multi-FiT to v2
  3. Implement DeViSe (probably the most difficult of the 3)

DeViSe is an interesting project; I’ve wanted to do it for the last few years.


I would also be interested in working on Multi-FiT. I’ve been thinking of trying it out for a future project. I’ll start by reading the paper thoroughly in any case.


I’ll begin looking at the code here in the next week or two myself; I need to finish up tabular first!


Also, I forgot one more: I’m working on deploying my models via Starlette (so a REST API), one for each type. I’m hoping to have it done in the next week or two with some detailed walkthroughs :slight_smile: I have the image classification models done, so if anyone needs help with those feel free to ping me.


Dear all,

I was interested in working with:

  • GANs and style transfer
  • PyTorch3D

Any help, or people who can point me in the right direction, would be much appreciated.


I (with a large amount of help from @lgvaz) have style transfer implemented in v2 here for you to start with :slight_smile::

While I’m at it, I guess I’ll also mention that repo too (yes, I have many, many projects). If you were wondering how to implement something, I’ve probably done it, or something extremely similar, in this repository to help you get started. It’s a collection of notebooks from my study group, and it includes pose detection (via keypoints), style transfer, EfficientNet, audio, and many other bits not done in the first course :slight_smile:

  • PS: Don’t feel discouraged if your idea is already there; please still do it! We all learn differently :slight_smile:

I couldn’t recommend @muellerzr’s repo and course more; they’re absolutely fantastic and really helped me get familiar with fastai2 :blush:

I’m building a library using nbdev to facilitate the use of fast style transfer (and anything related to feature loss, really); take a look here. I tried to make it as modular as I could, and it was a surprise to me that in the end I could change the task from stylizing images to putting hats on cats with very few lines of code :sweat_smile:
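The "feature loss" idea at the heart of fast style transfer is to compare activations of a fixed encoder at chosen layers instead of raw pixels. Here is a rough, generic sketch of that pattern using forward hooks; the tiny `encoder` is a made-up stand-in for a pretrained network like VGG, and `FeatureLoss` here is not the library’s actual class:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureLoss(nn.Module):
    """Sum of MSEs between activations of a frozen encoder at given layers."""
    def __init__(self, encoder, layer_ids):
        super().__init__()
        self.encoder = encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad_(False)          # the encoder is fixed
        self.layer_ids = layer_ids
        self.feats = {}
        for i in layer_ids:                  # capture activations via hooks
            self.encoder[i].register_forward_hook(self._make_hook(i))

    def _make_hook(self, i):
        def hook(module, inputs, output):
            self.feats[i] = output
        return hook

    def _features(self, x):
        self.encoder(x)
        return [self.feats[i] for i in self.layer_ids]

    def forward(self, pred, target):
        pred_feats = self._features(pred)
        with torch.no_grad():                # no gradients through the target
            target_feats = self._features(target)
        return sum(F.mse_loss(p, t) for p, t in zip(pred_feats, target_feats))

# Toy encoder standing in for e.g. a pretrained VGG slice
encoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
)
feat_loss = FeatureLoss(encoder, layer_ids=[1, 3])
```

Because only the *choice* of encoder and layers changes per task, swapping what the loss "cares about" is exactly the kind of few-line change the post describes.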


Awesome work, @lgvaz and @muellerzr. I will keep you posted; thank you so much.


I decided to go with the plant pathology competition on Kaggle, which is part of a CVPR workshop on fine-grained visual categorization:

The data is quite small, so the competition seems very beginner-friendly.
Here is the starter notebook with fastai v2; it uses almost nothing beyond what was in Lecture 1, and it still gives quite an impressive result very quickly:
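One small wrinkle with this competition is that the labels in `train.csv` are one-hot columns rather than a single label column, so a tiny helper is useful before feeding them to a DataBlock. The column names below (`healthy`, `multiple_diseases`, `rust`, `scab`) are my assumption based on the Plant Pathology 2020 data description; adjust if the CSV differs:

```python
import csv
import io

# Assumed one-hot label columns from the competition's train.csv
LABEL_COLS = ["healthy", "multiple_diseases", "rust", "scab"]

def row_label(row):
    """Return the name of the one-hot column set to 1 for this row."""
    return next(c for c in LABEL_COLS if row[c] == "1")

def load_labels(csv_text):
    """Map image_id -> single label string for the whole CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {r["image_id"]: row_label(r) for r in reader}
```

A function like `row_label` can then be used as the labelling function in a fastai DataBlock.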


If you go for DeepGBM, let me know :slight_smile:

Full transparency: going for that now :wink:


My plan of attack (open to potential collaboration, should I be able to squeeze things in):

  1. Update my “Finding DataBlock Nirvana” article to v2
  2. HuggingFace Abstract Summarization integration into v2 (from DataBlock to Predictions)
  3. HuggingFace NER integration into v2 (from DataBlock to Predictions)

I’ve already done the last 2 for v1 and have just figured out the DataBlock bits for #2 above (I think).

At some point I’d like to work on a full Hugging Face integration package that incorporates all of the above and all the other HF bits into something like fastai.huggingface, with various sub-packages for using their schedulers, tokenizers, models, etc.


I’m considering adding support for fastai v2 during the course. Concretely, that would allow you to:

  • easily deploy your vision and NLP models
  • make predictions via web app, iOS(/Android) app, Python SDK, or API
  • share your model with others so they can use it (invite by email).

Looks like @piotr.czapla is working on a port already:

Maybe not worth pursuing then?


Always worth pursuing, even if you have a guide :wink: BTW, that is fastai v1 (and the original paper’s code, IIRC).

There is a DeViSE implementation with an older fastai version:
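For anyone new to DeViSE: the model maps an image into a word-embedding space, and at inference time labels are ranked by cosine similarity between the predicted embedding and each label’s word vector. Here is a sketch of just that ranking step; `cosine_nearest` and the two-dimensional toy vectors are mine for illustration (real word vectors would be e.g. 300-d):

```python
import numpy as np

def cosine_nearest(query, word_vecs):
    """Rank labels by cosine similarity between a predicted embedding
    and each label's word vector. word_vecs maps label -> 1-D array."""
    labels = list(word_vecs)
    mat = np.stack([word_vecs[l] for l in labels])
    q = query / np.linalg.norm(query)
    m = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    sims = m @ q                      # cosine similarity to every label
    order = np.argsort(-sims)         # best match first
    return [(labels[i], float(sims[i])) for i in order]

# Made-up 2-D "word vectors" just to show the ranking behaviour
toy_vecs = {
    "cat": np.array([1.0, 0.0]),
    "dog": np.array([0.9, 0.1]),
    "car": np.array([0.0, 1.0]),
}
ranked = cosine_nearest(np.array([1.0, 0.0]), toy_vecs)
```

The training side is then "just" regressing image features onto these vectors, which is why the zero-shot behaviour falls out: unseen labels still have word vectors to rank against.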