Share your work here ✅

My guess is that part 2 will provide a lot of these answers :slightly_smiling_face: I have not touched optimizers, apart from maybe changing hyperparams here and there, since I started doing the courses; not sure there is any need for that.

For me, what I want are just simple things: reading in data however I'd like (images, tabular data, you name it), creating custom architectures, and using whatever I want for labels. I think that takes me 99% of the way along my wish list :smile:

I think maybe understanding PyTorch a little bit better helps to raise the curtain a little on what is happening… not sure.

Despite what I wrote above (maybe I was not very clear there) I no longer feel the need to understand everything… which is a very nice side effect of taking the fastai courses :slight_smile: I just want simple building blocks I can hook up together, train a model here and there, and move on with my life :slight_smile:

So to answer your question more fully - I think both reading custom data and modifying architectures is coming in part 2 :slight_smile: And if you can’t wait, there is a lot of that in part 2 v2 and in the machine learning course (but those use fastai v0.7, I believe).

I would venture a guess part 2 + practice would be a short and correct answer to your question :slight_smile:


thanks @radek

It seems to me that everything you would want to tweak is tweakable in fastai. I recently participated in my first Kaggle competition, and for most of it, I definitely felt out of my depth.

But what I did do was try to find out what other data scientists were doing and figure out how I could accomplish the same thing with fastai. Kaggle is a great place to learn because many people (like @radek) are willing to share their process.

If you haven’t already gone through it, course-v3 (specifically Lesson 7) should be enough to get you to the point of building your own architectures. I downloaded the source so I could go through it in my code editor and find out how the library does what it does. It’s really helping me become intimate with the library and better understand applied concepts.

Participating earnestly in a Kaggle competition and going through course-v3 allowed me to implement the neural network below from scratch, and I’m only getting started.


great work!
And thanks for your suggestion of replicating other Kaggle kernels.
I'll try this on Fashion-MNIST to see how it works out…


Built a project on finding similar images using the last layer of a CNN and locality-sensitive hashing for approximate nearest neighbors, using the Caltech 101 dataset. Here are some results (the upper-left image is used to query the others):

Here is the link to the notebook.
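For anyone curious how the approximate-nearest-neighbor step works, here is a minimal random-hyperplane LSH sketch in NumPy. The feature matrix is a random stand-in for real CNN embeddings, and the 16-bit hash length is an arbitrary choice:

```python
# Sketch of locality-sensitive hashing with random hyperplanes, assuming
# `features` are L2-normalised last-layer CNN embeddings (random stand-ins here).
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 512))          # 1000 images, 512-D features
features /= np.linalg.norm(features, axis=1, keepdims=True)

n_bits = 16                                          # hash length (arbitrary)
planes = rng.standard_normal((512, n_bits))          # random hyperplanes

def lsh_hash(x):
    # Each bit records which side of a hyperplane the vector falls on
    return (x @ planes > 0).astype(np.uint8)

codes = lsh_hash(features)

def query(q, k=5):
    # Candidates share the most hash bits; rank them by exact cosine similarity
    dists = (codes != lsh_hash(q)).sum(axis=1)       # Hamming distance to query
    candidates = np.argsort(dists)[:50]
    sims = features[candidates] @ q
    return candidates[np.argsort(-sims)[:k]]

nearest = query(features[0])                         # nearest[0] is the query itself
```

Hashing prunes the search to a handful of candidate images that share hash bits with the query, and exact cosine similarity then ranks only those candidates.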


Hello folks,
I attended fastai part 1 (2019) Oct-Dec last year, and I did a project, Deep Visual Semantic Embedding Models for Mobile, built with a modified fastai 1.0.33.

My project involved using semantic information from word embeddings to augment visual deep learning models to correctly identify a large number of image classes, including previously unseen classes. I used this work towards a course project at Stanford (AI Graduate Certificate) too.

Pets lessons (1 & 6) were super helpful in building the model and interpreting the results:

Stanford project report:

I presented this work at an internal Machine Learning conference at Adobe too and it was received very well. Thank you @jeremy and team for the amazing library and super useful docs.

Project Details
I used a lightweight mobile architecture, SqueezeNet 1.1, to train a model with pretrained fastText word vectors to learn semantic relationships between image labels and map images into a rich semantic embedding space. The model takes an image as input and outputs a 300-D image feature vector. Using an efficient cross-platform similarity-search library such as nmslib, the output feature vector can be used for image similarity search over model predictions, or for label prediction by looking up the nearest fastText word vector among the known image labels in the dataset. The model can also be generalized to zero-shot use cases by performing a nearest-neighbor search over model predictions for the fastText word vector representation of the input text label.
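The label-lookup step can be sketched as follows. In the actual project nmslib would index the vectors; plain NumPy stands in here, and the word vectors are random placeholders rather than real fastText embeddings:

```python
# Sketch of nearest-word-vector label prediction: given a model's 300-D output,
# find the closest known label's word vector. The vectors below are random
# placeholders, NOT real fastText embeddings.
import numpy as np

rng = np.random.default_rng(42)
labels = ["cat", "dog", "car"]                       # hypothetical label set
word_vecs = rng.standard_normal((3, 300))
word_vecs /= np.linalg.norm(word_vecs, axis=1, keepdims=True)

def predict_label(image_vec):
    # Cosine similarity against every known label's word vector
    v = image_vec / np.linalg.norm(image_vec)
    return labels[int(np.argmax(word_vecs @ v))]

print(predict_label(word_vecs[1]))  # "dog" - the vector matches its own label
```

In practice the index would hold thousands of fastText vectors, which is where an ANN library like nmslib earns its keep over brute-force dot products.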

This project observed that such visual-semantic models are able to perform image-to-image, image-to-text and text-to-image associations with reasonable accuracy, while using less than 7% of the disk space and training parameters of bigger models such as ResNet34 (used as the baseline).


I collected 100 photos each of Pikachu, Bulbasaur and Charmander, trained a model to 90% accuracy, and was able to classify the three starter Pokémon from each other :smiley: :smile: Feels great B-) Will make something better with time :smiley:

Hi @radek

Really appreciate your repo; your work has really guided me along in my study.

I started learning deep learning in Dec 2018 (fastai part 1 in Jan). I also joined the Humpback competition as practice, and I noticed that ResNet-50 on 5,005 classes is far from ideal. Somehow I found your repo along the way, and it kind of pointed me in the right direction on the computer vision part. To understand what you have done, I dug into the fastai docs to figure out how callbacks work, how to get a customized MAP5, how to write a customized ItemList (I wanted to be able to show paired Siamese data), how hooks work… and even more (especially why it's ni*2 for AdaptiveConcatPool :slight_smile:)
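On the ni*2 question: AdaptiveConcatPool2d concatenates an average pool and a max pool of the same feature map, so the head sees twice the channels. A NumPy sketch with a fake activation tensor:

```python
# Why AdaptiveConcatPool needs ni*2: it concatenates an average-pool and a
# max-pool of the same feature map, doubling the channel count. Sketched in
# NumPy with a fake (channels, height, width) activation tensor.
import numpy as np

ni = 512
acts = np.random.rand(ni, 7, 7)                  # final conv feature map

avg_pool = acts.mean(axis=(1, 2))                # (512,)
max_pool = acts.max(axis=(1, 2))                 # (512,)
pooled = np.concatenate([avg_pool, max_pool])    # (1024,) -> the head sees ni*2

print(pooled.shape)  # (1024,)
```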

Just from working to understand your repo, I got my first PR into fastai and my first kernel medal on Kaggle…
I am currently still working on the Siamese network and building a triplets dataset (anchor/positive/negative), but I'm starting to understand the details of solving a multi-class problem with 1000+ classes (you can't just use a pretrained ResNet and hope it works; that's why bounding boxes and one-shot learning come in).
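For reference, the triplet margin loss that goes with an anchor/positive/negative dataset can be sketched like this (random placeholder embeddings, margin chosen arbitrarily):

```python
# Sketch of the triplet margin loss: pull the anchor toward the positive,
# push it away from the negative. Embeddings here are random placeholders.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    # Loss is zero once the negative is at least `margin` farther than the positive
    return np.maximum(d_pos - d_neg + margin, 0).mean()

rng = np.random.default_rng(0)
a, p, n = (rng.standard_normal((8, 128)) for _ in range(3))
print(triplet_loss(a, p, n))
```

The margin is what keeps the network from collapsing everything to the same point; without it, zero loss would be trivially achievable.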

Still got a lot to learn, but thank you very much!!! :+1:


I just finished Part 1, Lesson 2 and deployed a simple web app. It takes a link to an image of art and attempts to identify the artist. I’ve only trained it on a few artists right now so it’s a bit limited in its scope but was informative to work on nonetheless! Link is below.

It has been a while since I wrote a blog post, but here it is :slight_smile:

Besides some thoughts on testing, it gives an overview of how a machine learning project can be structured to improve the chances of arriving at a working solution. In some sense, this is a follow-up to a post I wrote while taking the v2 version of the course:

This is probably my favorite part of the post, one of my very first attempts to speak to a subject I care very deeply about:

If YouTube is pushing a change to its recommendation algorithm that will drive engagement (and ad revenue) but will do so by promoting conspiracy videos, should it test the impact on society? Yes. There is no absolution through the fact of not having considered something. The more powerful algorithms become in shaping our reality, the more responsibility we as the authors need to take. You cannot be driven by profit and say you are agnostic to the impact that you are having. That is the definition of an externality. Whether a corporation destroys an ecosystem by dumping waste into a river, or poisons people’s minds through promoting malicious content, the end result is the same.


So after Lesson 2 I created a neural network that can differentiate buildings of the following architectural styles:
Ancient Greek, Gaudí, Gothic, Islamic, Modern, Renaissance and Traditional Chinese,
with an accuracy of about 95%. There used to be 17 classes, but a lot of them were very similar (like Traditional Chinese and Japanese, or Postmodern and Modern), so the model only reached about 45% accuracy; I decided to trim the classes down a little.

Hey fast folks!

So you all know already about the datasets @jeremy released a few days back (Imagenette and Imagewoof). I trained the same network on both of them. Below I'm summarizing the results I've got so far. I'll keep experimenting with this for some time, after the revelation I had on seeing the Imagewoof results.

Both experiments were done at 160px resolution for 40 epochs, as suggested by Jeremy.

Imagenette results :

Training Accuracy: 71.12%
Training Loss: 0.8912

Validation Accuracy: 70.39%
Validation Loss: 0.8913

Now, for Imagewoof. drum rolls…

Imagewoof results :

Training Accuracy: 54.02%
Training Loss: 1.2670

Validation Accuracy: **30.99%**
Validation Loss: 2.0365

Thank you @jeremy for providing this awesome task to work on. So much to improve and learn from this! Time to implement the tricks you taught us in the courses. ** evil laugh **

All of my further work will be kept here.
Github Repo:

Walk RNN - LSTM over graph
We recently conducted a small R&D project using a language model for graph classification and compared accuracy to a recent paper which uses a CNN. I wrote a short blog post yesterday about our results, with a link to our repo on GitHub, if you're interested. In short, we passed random walks over a graph, enriched with structural and graph-property information, to an RNN and trained a classifier to label small graphs. (Apologies: I posted this yesterday, but this seems a more fitting location.) Thank you, Jeremy and the community!


This may help you: Using AUC as metric in fastai.
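For intuition, ROC AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties counting half). A tiny pure-Python sketch of that definition, not the fastai metric itself:

```python
# ROC AUC via its probabilistic definition: the fraction of positive/negative
# pairs where the positive is scored higher (ties count as half a win).
def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # 1.0 -> perfect separation
```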


Hi everyone, so I went for a spectrogram classification problem: I made spectrograms of 8 different types of whale sounds and trained a ResNet50 on them, but this is what I get. I think it is overfitting; can somebody give me confirmation? Thanks.


I’m trying to teach a CNN how to add and subtract… Is it possible? To keep this thread clean, I’ll post more details here:

After Lesson 6, I attempted two of the homework exercises that Jeremy suggested:

Creating my own dropout layer:

  • For the Lesson 5 homework, I had created my own Linear class. So I took that class as a starting point, and updated it to add dropout functionality
  • To do this, I edited the “forward” function to have an option for dropout (see screenshot below). This made my model run really slowly, so my code was obviously inefficient and not how you would do it in practice! But building it was still helpful for making sure I understood dropout properly
  • I also tried the Bernoulli trial function that Jeremy showed, where you multiply the surviving activation values by 1 / (1 - p) whenever you do not change an activation value to zero. But I got stuck with an error and did not end up implementing it (the error was “one of the variables needed for gradient computation has been modified by an inplace operation”). Since I did not implement this part, if I were to use this model at test time (instead of just for training), I would need to multiply the activations by (1 - p) to adjust appropriately
  • Full code available at GitHub
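For what it's worth, the inverted-dropout variant described above can be sketched in NumPy like this (a toy stand-in, not the fastai/PyTorch implementation):

```python
# Inverted dropout sketch: zero each unit with probability p during training
# and scale survivors by 1 / (1 - p), so no rescaling is needed at test time.
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    if not training or p == 0:
        return x
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) >= p        # keep each unit with probability 1 - p
    return x * mask / (1 - p)              # scale survivors to preserve the expectation

acts = np.ones((4, 6))
out = dropout(acts, p=0.5)                 # entries are either 0.0 or 2.0
```

Scaling during training is what makes the layer a no-op at inference, which is why PyTorch's `nn.Dropout` behaves differently under `model.train()` and `model.eval()`.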

Creating a mini-batch without using data.one_item:

  • Per Jeremy’s suggestion, I decided to try creating a manual mini-batch as input for our pets heatmap
  • This required a lot of different steps (e.g. cropping my image, resizing it, normalizing it, converting it from a numpy array to a PyTorch tensor), so it took me a little while to figure it all out. It was definitely helpful for understanding exactly what happens when we normalize an image
  • The heatmap seems to show correctly, but when I show it I also get a warning (“Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers)”). Does anyone know what this warning means, and what I should have done differently?
  • Full code available at GitHub
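The steps above can be sketched in NumPy like this (the mean/std are the usual ImageNet stats used when normalizing for pretrained models; the image here is a random stand-in, and in fastai/PyTorch the result would then go through torch.from_numpy):

```python
# Manual mini-batch sketch: scale a uint8 image to 0..1, normalise with
# ImageNet stats, and reorder to a 1-image NCHW batch. The image is random.
import numpy as np

img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

mean = np.array([0.485, 0.456, 0.406])   # ImageNet per-channel stats
std = np.array([0.229, 0.224, 0.225])

x = img.astype(np.float32) / 255.0       # floats in 0..1 (what imshow expects)
x = (x - mean) / std                     # per-channel normalisation
x = x.transpose(2, 0, 1)[None]           # HWC -> NCHW mini-batch of one

print(x.shape)  # (1, 3, 224, 224)
```

Note the imshow warning above comes from the normalisation step: after subtracting the mean and dividing by the std, values leave the 0..1 range, so the un-normalised 0..1 version is the one to plot.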

Have a look at the data type and range (min & max) you are handing over to the plot function.
From the code I would guess avg_acts is a float array, so normalizing it to lie between 0..1 should do it (if it were an integer array, the values should be between 0..255).
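Concretely, the rescaling could look like this (avg_acts here is a random stand-in for the real activations):

```python
# Min-max rescale float activations into 0..1 before passing them to imshow.
import numpy as np

avg_acts = np.random.randn(11, 11) * 3        # stand-in values well outside 0..1

rescaled = (avg_acts - avg_acts.min()) / (avg_acts.max() - avg_acts.min())
print(rescaled.min(), rescaled.max())  # 0.0 1.0
```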


Certainly not - it’s classifying your validation set perfectly! :slight_smile:


Thank you Jeremy, that's the reason why I asked the question :sweat_smile: , in the sense that it seemed too perfect to be true.
Thanks for the confirmation.