Share your work here ✅

Great, I’ll follow this closely since I’m a few courses behind you.


Have you seen this: or this?


Hey guys, I’m working on getting all of the articles written that I need to, but here is one I’m most proud of! For a competition at my university, I used ULMFiT to help people find trustworthy articles, which won us first place! I know the metric could be improved greatly; I haven’t had time to revisit it with fresh ideas, but I plan to do so after the summer. Let me know your thoughts!


:smiley: Let me dive into these!

Loving the course, thanks so much to the fastai team!

After the first lesson I made a small dataset for the game Age of Empires 2 to do some experiments, and wrote my first blog post here:


Hey everybody!
Just published an article about my work in the last couple of months.
We extended an existing implementation of ULMFiT to take additional categorical and continuous metadata into account. This would not have been possible without the awesome work of @quan.tran and @joshfp!
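For readers curious how that kind of extension might look, here is a minimal PyTorch sketch of a classifier head that concatenates pooled text features with categorical embeddings and normalized continuous variables. This is not the article's actual implementation; the class name, layer sizes, and dimensions are all made up for illustration.

```python
import torch
import torch.nn as nn

class TextWithMetadataHead(nn.Module):
    """Toy classifier head: concatenate pooled text features with
    embedded categorical metadata and normalized continuous metadata."""
    def __init__(self, text_dim, cat_cardinalities, emb_dim, n_cont, n_classes):
        super().__init__()
        # one embedding table per categorical variable (sizes are illustrative)
        self.embeds = nn.ModuleList(
            [nn.Embedding(card, emb_dim) for card in cat_cardinalities]
        )
        self.bn_cont = nn.BatchNorm1d(n_cont)
        in_dim = text_dim + emb_dim * len(cat_cardinalities) + n_cont
        self.classifier = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, text_feats, cats, conts):
        # cats: (batch, n_cat) integer codes; conts: (batch, n_cont) floats
        cat_feats = torch.cat(
            [emb(cats[:, i]) for i, emb in enumerate(self.embeds)], dim=1
        )
        x = torch.cat([text_feats, cat_feats, self.bn_cont(conts)], dim=1)
        return self.classifier(x)

head = TextWithMetadataHead(text_dim=400, cat_cardinalities=[10, 5],
                            emb_dim=8, n_cont=3, n_classes=2)
out = head(torch.randn(4, 400), torch.randint(0, 5, (4, 2)), torch.randn(4, 3))
print(out.shape)  # torch.Size([4, 2])
```

In the real setup the text features would come from the ULMFiT encoder's pooled output rather than random tensors.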


Great post Andreas! I’m glad my work was useful to you and thanks for the acknowledgment!

I took a bit of a detour after lesson 4 in an effort to gather Twitter data for an NLP project relating to Bitcoin. While this isn’t technically related to the course, I think a lot of the data-scraping techniques can be applied across a variety of domains. As a current CS student at Penn, a lot of my CS work is very theory-oriented, so I’d forgotten how much I love to code and build exciting projects from the ground up. Hope you enjoy the article, and message me if you’d be interested in getting access to the database. Also, if you’d like tweets relating to a different keyword but don’t want to recreate the project, I’d be happy to scrape them for you for the cost of the servers (~$1 per 10 million tweets).


Hey all,
Made a little app based on the fastai bear classifier; mine is a pigeon species classifier, in which the trained model identifies a pigeon’s species by analyzing its picture. The error rate is still high (30-33%) because the training sample was too small; I’m working on gathering more data and covering as many species as possible. Looking at the data, some species clearly look very similar to each other, just with different names, so I hope more data will improve the accuracy. Nonetheless, I got to learn a good deal building the app. Please take a look here …excuse the typo.


We now have a way to evaluate the model on a new dataset.
@quan.tran implemented single data point prediction.

Have a look at predict_one_item here


hi all,
I started the course recently and I love it. Since I’m an iOS developer and develop mainly in Swift, I decided to do some extra work. Here is gradient descent (taken from Deep Learning 2019, Lesson 2), implemented in Swift for TensorFlow for simple linear regression.
Any feedback is welcome, thanks!
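For anyone following along in Python rather than Swift, the same kind of gradient descent for a linear fit can be sketched in a few lines of NumPy (this is my own toy version, not the poster's Swift for TensorFlow code):

```python
import numpy as np

# synthetic data: y = 3x + 2 plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 2 + rng.normal(scale=0.1, size=100)

a, b = 0.0, 0.0   # parameters to learn
lr = 0.1          # learning rate

for _ in range(500):
    pred = a * x + b
    # gradients of mean squared error with respect to a and b
    grad_a = 2 * np.mean((pred - y) * x)
    grad_b = 2 * np.mean(pred - y)
    a -= lr * grad_a
    b -= lr * grad_b

print(round(a, 1), round(b, 1))  # recovers roughly 3.0 and 2.0
```

The Swift for TensorFlow version follows the same loop, with the gradients computed automatically instead of by hand.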


Hi, if I remember correctly, the problem is not with evaluating one row of data (if I understood your code well, that’s what predict_one_item does :slight_smile: ); we can do that with, for example, the learner.predict() method, but it works very slowly if we use it to predict a set of new data (1000+ records) one by one. That’s what I was trying to address in my functions (I needed this to implement partial dependence, feature importance, and other similar things).
Now there is a way to predict a batch with .get_preds() and some hackery substituting the test set with new data, but I ran into some inconsistencies when trying that approach.
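The speed gap being discussed mostly comes down to vectorization: a per-row predict call pays its full overhead thousands of times, while one batched call amortizes it. A toy NumPy illustration (not fastai code; the "model" here is just a random linear layer):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(20, 3))        # stand-in "model": a single linear layer
X = rng.normal(size=(2000, 20))     # 2000 new rows to score

def predict_one(row):
    # per-row call, analogous to looping over learner.predict()
    return row @ W

# one-by-one: 2000 separate calls, each with its own call overhead
preds_loop = np.stack([predict_one(row) for row in X])

# batched: one call, analogous to scoring everything via get_preds()
preds_batch = X @ W

# identical numbers, very different cost at scale
assert np.allclose(preds_loop, preds_batch)
```

With a real deep learning model the gap is far larger, since each per-row call also pays preprocessing and GPU transfer overhead.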

That’s a really nice project, and thanks for the acknowledgement! I don’t know if you’ve tried this before, but training different models (xgboost, NN) and stacking them can also help boost performance, though I guess it would complicate the deployment process…


I was also thinking of assigning the new data as the test set and calling get_preds() to evaluate a new dataset, but you’ve already investigated this. I sometimes got inconsistent results from pred_batch and get_preds too. I will take a look at what you have in the other thread for some experiments.


Nice! In Part 2 of the course, most of the fastai library is re-implemented in S4TF. Luckily, you won’t have to worry about learning Swift! :wink:


I trained a classifier to discriminate between Chanterelle mushrooms and Jack-o-Lantern mushrooms to 85% accuracy.

Chanterelles are delicious. Jack-o-Lanterns are poisonous!

Great course, thank you for all of it :slight_smile:


@quan.tran @joshfp FYI the article just got featured in Towards Data Science:


Using Google Images, I created a dataset of handguns, namely Glocks, revolvers, and Desert Eagles. I trained a classifier and it got 96% accuracy!
Here is the link:
Handgun classification

Excellent job! I have been thinking of doing a regression example with a dataset I could understand easily, and this is a perfect example.

Many Thanks mrfabulous1 :smiley::smiley:

I’ve taken Lesson 3’s CamVid image segmentation and the Planet image classification lesson and combined them to create an image segmentation model that detects building footprints from satellite images.

There is a lot of training data (tens of GB of high-resolution images) from the SpaceNet competition, but mostly I wanted to do one project on my own, so I took ~6k image chips of Rio de Janeiro.

The hardest part was converting the .geojson footprints into images (I used rio rasterize in parallel), and then fiddling to avoid running out of memory. Overall I’m quite happy. Here is the notebook, and the results:

Left: ground truth; right: prediction.
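For anyone curious about that rasterization step, the core idea is deciding, for each pixel center, whether it falls inside a footprint polygon. A self-contained, pure-Python toy sketch of that idea (in practice a tool like rio rasterize handles geo-transforms and is far faster):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon given as (x, y) vertices?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # does a ray going right from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(polygons, width, height):
    """Burn polygons (already in pixel coordinates) into a binary mask."""
    mask = [[0] * width for _ in range(height)]
    for poly in polygons:
        for r in range(height):
            for c in range(width):
                # sample at the pixel center
                if point_in_polygon(c + 0.5, r + 0.5, poly):
                    mask[r][c] = 1
    return mask

# a 4x4-pixel "footprint" square inside an 8x8 chip
square = [(2, 2), (6, 2), (6, 6), (2, 6)]
mask = rasterize([square], 8, 8)
print(sum(sum(row) for row in mask))  # 16 pixels burned in
```

Real footprints arrive in geographic coordinates, so the missing piece here is the affine transform from lon/lat to pixel space, which is exactly what rio rasterize's georeferencing options take care of.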