Share your work here ✅

(Aditya Harit) #1215

I don’t think even a person can say for sure whether a book is technical or not just by its cover. There are plenty of technical books with non-bland covers. You can obviously make a judgement from the title, but that is NLP.

Maybe don’t judge a book by its cover? :stuck_out_tongue:


(Anubhav Maity) #1216

That’s correct. :slight_smile: There are technical books whose cover design and colors are the same as those of self-help books and novels.


(Surabhi Raje) #1217

Fixed the code link for the post!


(Anders Jürisoo) #1218

Got through exercises 0-3, that was great fun! I took the advice to get my hands dirty as fast as possible and built this little web app that tries to classify your Facebook avatar as one of 36 Oxford pets (I had to drop Sphynx since it was getting too many matches :smiley:).



(Anders Jürisoo) #1219

Wow, inspiring story :slight_smile: Thanks for sharing!

1 Like

(Jeff Hale) #1220

I made an app to classify skin cancer images and deployed it using Render. I wrote about how easy it was to make and deploy in this Medium article:



Hey there,

I want to share my MNIST equivalent with you :smiley:
This is my little image classification app that distinguishes between the seven different plastics defined by the industry-standard RIC (resin identification code). I built my own dataset by taking photos during almost every shopping trip. By now my dataset contains ~450 pictures of the seven different plastics.

This is my dataset on kaggle:
You are very welcome to contribute :partying_face: (no web-search pictures, please)

My goal was to build a little website or app that tells you whether a given plastic is ready to recycle or not. What are the alternatives? What specific effects does the classified plastic have on our health? What does it mean for the environment? Here are some pictures of my mock-up - you can also test it on Render:
(As long as I still have credit on render :wink: )

Sadly the model’s performance is not so good yet; I think this might be because of the lack of data.
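For anyone trying something similar, the training itself is just the standard fastai v1 folder-per-class pipeline; a minimal sketch (the path, image size, and augmentation values here are hypothetical, not from my actual notebook):

```python
# Minimal fastai v1 sketch for a folder-per-class dataset.
# Path, image size, and augmentation values are hypothetical.
from fastai.vision import *

path = Path('data/plastics')  # one subfolder per RIC class, e.g. PET/, HDPE/
data = (ImageDataBunch
        .from_folder(path, valid_pct=0.2, size=224,
                     ds_tfms=get_transforms(flip_vert=True, max_rotate=30))
        .normalize(imagenet_stats))

learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
```

With only ~450 images, heavier augmentation (flips, rotations) can help a bit, but more photos will help more.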

Do you think this could be interesting to develop further? Is anyone interested in collaborating to make this openly available as a service?


(Jona R) #1222

Very cool! I tried something similar, but I didn’t get nearly as high accuracy.
Do you have any thoughts about why the image activations for the wrongly classified examples are plotted in such weird places (e.g. the whitespace of the JavaScript)?



This is great - thank you for sharing. I searched for an article like yours a while ago :wink:

1 Like


Hi, after following the first lesson at the start of the month, I’m excited to share with you all my first kernel ever!

I picked a fruit dataset on Kaggle and classified it. It’s not much, but I loved every single line of code I wrote (copied, mostly) :smiley:

1 Like


Me too, I generated my own version of MNIST :laughing: by throwing in whatever fonts I found on my system (not just numbers, but A, B, C, … and a, b, c, … too). I use the same dataset for 3 learning tasks:

  1. Classifying alphanumerics 1, 2, 3, ..., 9 and A, B, C, ..., Z and a, b, c, ..., z.
  2. Classifying font type.
  3. Classifying font style.
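Since all three tasks share one generated dataset, each run just needs a different label function over the filenames. A small sketch, assuming a hypothetical naming scheme like `Arial__bold__A.png` (and assuming digits 0-9, which gives 62 glyph classes):

```python
import string

# Hypothetical filename scheme: "<font>__<style>__<glyph>.png",
# e.g. "Arial__bold__A.png". One generated dataset, three label choices.
GLYPHS = string.digits + string.ascii_uppercase + string.ascii_lowercase  # 62 classes

def labels(fname):
    """Split a generated filename into its (font, style, glyph) labels."""
    stem = fname.rsplit('.', 1)[0]
    font, style, glyph = stem.split('__')
    return font, style, glyph

print(labels('Arial__bold__A.png'))  # ('Arial', 'bold', 'A')
```

Then each of the three classifiers just picks one element of the tuple as its target.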

Alphanumerics aside, I also tried face recognition and donkey-mule-horse recognition. Blessed be all donkeys, mules and horses (and all alphanumerics).

1 Like

(Anubhav Maity) #1226

Thanks for pointing this out. I rescaled the images to 352 and redid the training as in the “Lesson 6: pets revisited” notebook to view the activations more clearly. Even after doing that, the activations are still in places where we cannot relate them to the predictions.

Am I missing anything? Can anyone in the forum help us with this?

github repo

1 Like


Hi all! First, thanks to Jeremy for this awesome course. I did a fun project based on lesson 1: a vehicle classifier. I took images from Google and fed them to the fastai library to classify different types of vehicles (SUV, Formula 1, hypercar, pickup truck, batmobile, container truck, heavy-duty truck, convertible). Without changing any default settings, I achieved ~90% accuracy with resnet50. My next step will be adding more images to each class, adding more transport types like bike, bicycle, auto, bus etc., and then integrating with a live traffic cam to analyze how these transport types move. Again, thank you so much for this awesome course.

Looking for suggestions on what to add to this fun project. Thanks!


(Derek) #1228

Hi all,

Just finished Lesson 1. Here’s my writeup on a fun little bear classifier I built using ImageNet data:

Big takeaway: image URLs from ImageNet aren’t always good. For the black/brown bear images, I found that I could use only about 400 images, or 15% of the data.
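For anyone else cleaning up ImageNet URL lists, a sketch of the fastai v1 helpers for this kind of cleanup (the file name and folder below are placeholders):

```python
# Sketch (fastai v1): download from a text file of image URLs, then drop
# files that fail to open. File and folder names are placeholders.
from fastai.vision import *

path = Path('data/bears/black')
download_images(path/'urls_black.txt', path, max_pics=1000)
verify_images(path, delete=True, max_size=500)
```

`verify_images` with `delete=True` is what ends up discarding the dead or corrupt downloads.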

Despite this, I’m surprised by how good the results are. I probably need to test on something outside of the ImageNet dataset to be sure, but 2.5% error rate is pretty good!

I also found an ImageNet misclassification. I have no clue how common that is, or how to report it. How do you normally deal with something like that? Thanks again for the great (free) course!

1 Like

(Dusten) #1229

Color Swatch Dataset

I am excited to refresh my knowledge of deep learning with this year’s release of Part 1 and Part 2 (soon to be released).

While we all wait for Part 2 to be released, I went back and rewatched Part 2 from 2018.

In Lesson 11, a new idea was introduced with DeVISE: the model can find things in the dataset that it may not have learned natively. The quote that kicked off this line of thought was “…I don’t know much about birds but everything else here is BROWN with WHITE spots, but that’s not…”

The comment about the color brown now has me thinking about object detection: can we ask whether the model knows something about color, for example “red car”?

Anyways here’s a link to the notebook and one to the dataset.

1 Like

(Diego Medina-Bernal) #1230

Hi everyone! Just wanted to share a quick and fun project I put together. Essentially, I took everything I have learned from fastai, found a very interesting white paper (link below), and gave replicating their work a shot!

Predicting price action movement for currency pairs with ~82% Accuracy

Research Paper:

Amazing work by: Yun-Cheng Tsai, Jun-Hao Chen, Jun-Jie Wang

Github repository:

I have set up the repository with different Jupyter Notebooks for:

  • Downloading data using Oanda API (Key has been destroyed)
  • Data pre-processing & Applying indicators
  • Converting Charts (sliding window approach)
  • Using FastAI, DenseNet Architecture
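The sliding-window step itself is simple; a pure-Python sketch (the window length and stride here are illustrative, not the paper’s values):

```python
# Pure-Python sketch of the sliding-window step: cut a price series into
# fixed-length, overlapping windows; each window later becomes one chart image.
def sliding_windows(series, length=32, stride=1):
    return [series[i:i + length]
            for i in range(0, len(series) - length + 1, stride)]

prices = list(range(100))  # stand-in for a series of closing prices
windows = sliding_windows(prices, length=32, stride=1)
# 100 - 32 + 1 = 69 windows, each 32 points long
```

Each window is then rendered as a candlestick/indicator chart and saved as an image for the CNN.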

I just finished Part 1 of the new FastAI course so thank you so much @jeremy, @rachel, and to the team for such an AMAZING course. I cannot wait for Part 2 coming in the summer.

Hope this helps others!

PS. I’m learning more about Finance/Quant trading & am very new to Machine Learning so please don’t mind any mistakes in the notebooks. Still learning :slight_smile:


(Gaurav) #1231

I’ve created a python package inltk: Natural Language Toolkit for Indian Languages, available for download on pip.

It contains language models, a language classifier, and tokenizers for 10 Indic languages, namely Sanskrit, Hindi, Punjabi, Gujarati, Nepali, Kannada, Malayalam, Marathi, Bengali, and Odia, which I trained using fastai.

Here’s a Demo.

I believe this toolkit will be helpful in developing apps that reach and impact millions of people in their local languages as we bring the next billion users online.

Big Thanks to @jeremy and fastai team, for everything you do!


(javier berneche) #1232

Hi everyone!

I’m working on lesson 2 and decided to make a movie poster classifier. I wasn’t expecting much, since the data from Google was really noisy and movies usually have more than one category, but I decided to give it a try.

output of learn.fit_one_cycle(8):

That seems like a disaster, but checking the confusion matrix is a little more encouraging.

It learned something and the most confused categories make a lot of sense.

What I found weird is that my learning rate plot after unfreezing just goes up

Anyone know what that means?
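For reference, this is roughly the usual fastai v1 unfreeze-and-refind pattern the plot comes from, assuming a `learn` object from earlier training (the slice bounds below are placeholders, not what I actually used):

```python
# Usual fastai v1 pattern: rerun the LR finder after unfreezing and pick a
# max_lr slice below the point where the loss starts climbing.
# The slice bounds here are placeholders, not tuned values.
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(2, max_lr=slice(1e-6, 1e-4))
```

As far as I understand, a curve that only goes up suggests the loss diverges even at small learning rates once the early layers are unfrozen, so the lessons suggest picking `max_lr` from the region before the climb.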


(Sven) #1233

This is a pet project I am involved in. I’m not sure if this is the right place to post, but it’s loosely inspired by me learning fastai, so I thought it might be interesting:

We felt it’s important to keep up to date with recent discussions in machine learning across the net, so I helped write a site that collects this kind of content:

It can do some interesting queries, e.g. changes to the SotA in the last month, sorted by “top” (first places first):

It also knows which arXiv papers were written by which group, so you can e.g. see all discussed papers written by Google in the last 3 months, ordered by date:

It uses a sentiment model to decide which Twitter messages are related to machine learning, and it also tries to find the most significant phrase in the conclusion of an arXiv paper (Could it change the SotA? What are the problems with this approach?) and display it next to the paper’s title.

1 Like

(Anubhav Maity) #1234

I have worked on news categorization of the AG News dataset using the fastai library and got an accuracy of 93%. You can check out the GitHub repo here
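For anyone who wants to try it, a sketch of the standard fastai v1 two-stage text recipe (ULMFiT); file names and hyperparameters below are placeholders, not exactly what I ran:

```python
# Sketch of the fastai v1 two-stage text recipe (ULMFiT): fine-tune a
# language model, then reuse its encoder in a classifier.
# File names and epoch counts are placeholders.
from fastai.text import *

path = Path('data/ag_news')
data_lm = TextLMDataBunch.from_csv(path, 'train.csv')
learn_lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)
learn_lm.fit_one_cycle(1)
learn_lm.save_encoder('ft_enc')

data_clas = TextClasDataBunch.from_csv(path, 'train.csv',
                                       vocab=data_lm.train_ds.vocab)
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('ft_enc')
learn.fit_one_cycle(1)
```

The key step is sharing the vocabulary and the fine-tuned encoder between the language model and the classifier.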