Share your work here ✅

Abnormal robotic arm!

A small utility for lazy practitioners!


I’ve made a web app to identify the species of a tree from a photo of its bark (source code).

It’s currently trained on a very small dataset I collected from eight trees in the local park. As such, it only knows about London plane, Sweet chestnut, European oak and Field maple, and its accuracy is ~70% (see update below).

Importantly, it also uploads the submitted photo to AWS S3 and then asks you whether the classification was correct. Based on your feedback, it labels the uploaded image, which means the more people use it, the better it will get. It doesn’t yet retrain itself automatically, though.
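I don’t know the app’s actual code, but the upload-and-label step could look something like this minimal sketch using boto3, where `make_key`, `upload_feedback` and the `feedback/<label>/` bucket layout are all my own hypothetical names:

```python
def make_key(label: str, filename: str) -> str:
    # Encode the user-confirmed label in the S3 object key, so labelled
    # images can later be gathered per class for retraining.
    return f"feedback/{label}/{filename}"

def upload_feedback(bucket: str, local_path: str, label: str) -> str:
    # Requires AWS credentials to be configured for boto3.
    import boto3
    key = make_key(label, local_path.rsplit("/", 1)[-1])
    boto3.client("s3").upload_file(local_path, bucket, key)
    return key
```

Storing the label in the key keeps the feedback loop simple: a later retraining job only has to list the bucket prefixes to rebuild a labelled dataset.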

A recent paper, Tree Species Identification from Bark Images Using Convolutional Neural Networks, achieved 94% accuracy over 20 different tree species using a much larger dataset of >20k images. Their 30GB corpus is available to download, and I plan to pretrain a network on it before fine-tuning on my smaller dataset.
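As a sketch of that fine-tuning step (not the author’s code; it assumes a plain PyTorch model whose last layer is an `nn.Linear` head), the pretrained head for the paper’s 20 species can be swapped for one sized to the smaller label set:

```python
import torch.nn as nn

def swap_head(model: nn.Sequential, n_classes: int) -> nn.Sequential:
    # Replace the final classification layer so weights pretrained on one
    # label set (e.g. 20 bark species) can be fine-tuned on another
    # (e.g. the app's handful of local species).
    in_features = model[-1].in_features
    model[-1] = nn.Linear(in_features, n_classes)
    return model
```

The earlier layers keep their pretrained weights; only the new head starts from random initialisation before fine-tuning.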

UPDATE: after collecting photos of more trees and training at multiple resolutions, I’ve got accuracy up to 93% and expanded the set of tree species to six.


I’ve trained a CNN regression model to count the number of people in a crowd using the UCF-QNRF dataset.

Even though the model is underfit, its predictions are within a factor of 3 of the true count, and often much better than that. That’s already pretty useful; it’s better than I can guess.

This graph shows the ratio of predicted to actual counts, plotted against the actual counts.
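As an illustration of that metric (function names are mine, not from the notebook), the prediction:actual ratio and the factor-of-3 check could be computed like this:

```python
import numpy as np

def prediction_ratio(preds, actuals):
    # Ratio of predicted to actual crowd counts; 1.0 means a perfect count.
    return np.asarray(preds, dtype=float) / np.asarray(actuals, dtype=float)

def within_factor(preds, actuals, factor=3.0):
    # True if every prediction is within the given multiplicative factor
    # of the true count, in either direction.
    r = prediction_ratio(preds, actuals)
    return bool(np.all((r >= 1.0 / factor) & (r <= factor)))
```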


See the notebook for details.


Hi there! I wrote a short post on using the data block API. Any feedback, corrections, or suggestions greatly appreciated :smiley:


I used the CelebA dataset from Kaggle for a regression model.

list_landmarks_align_celeba.csv in the dataset contains the image landmarks and their respective coordinates. There are 5 landmarks: left eye, right eye, nose, left mouth corner, and right mouth corner.
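Each row of that CSV holds 10 coordinate columns (an x and a y per landmark). As a small sketch (helper name is mine), they can be reshaped into the (5, 2) array of points a regression target needs:

```python
import numpy as np

# Column order as in list_landmarks_align_celeba.csv
LANDMARK_COLS = [
    "lefteye_x", "lefteye_y", "righteye_x", "righteye_y",
    "nose_x", "nose_y", "leftmouth_x", "leftmouth_y",
    "rightmouth_x", "rightmouth_y",
]

def row_to_points(row: dict) -> np.ndarray:
    # Reshape the 10 coordinate columns into a (5, 2) array of (x, y)
    # points, one per landmark.
    return np.array([row[c] for c in LANDMARK_COLS], dtype=float).reshape(5, 2)
```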

This notebook builds a learner to predict those five points. I used a 50,000-image subset of the 200,000 images in this dataset and got good results:


I have extended last week’s (lesson 3) planet notebook to add support for creating a submission.csv file that we can upload to Kaggle for grading.

Here’s how I did it.
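I don’t have the notebook’s exact code here, but the final step — writing the predicted tags out as submission.csv — might look like this sketch (function name and argument shapes are my assumptions; the planet competition expects an `image_name` column and a space-separated `tags` column):

```python
import csv

def write_submission(ids, tag_lists, path="submission.csv"):
    # ids: image names; tag_lists: predicted tags per image.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_name", "tags"])
        for image_name, tags in zip(ids, tag_lists):
            writer.writerow([image_name, " ".join(tags)])
```

The resulting file can then be uploaded through the Kaggle website or the `kaggle competitions submit` CLI.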


I also used the same CelebA dataset for multi-class classification in this notebook.

This was the result for a picture of me!


This is awesome! :slight_smile:


The data block API is beyond amazing :slight_smile:

I have transitioned the Quickdraw starter pack to use the data block API. There were models I wanted to train that would have required hacking together custom Datasets and potentially custom DataLoaders - the data block API makes those headaches go away :slight_smile:

As for the starter pack, this time I ironed out a couple of the rough edges of the earlier version. I also now generate the drawings on the fly, so experimentation should be much easier.

The only annoying thing about this dataset is how long training takes at 256x256 - but maybe there is a way to get equally good results with smaller sizes?! :wink:


Transfer-Learning - Image Classification

Please check out my work on optimizing image classification using transfer learning! It’s an image classifier for 4 different types of Arctic dogs. This Medium post walks step by step through how I finally brought the error rate down using a few tips and tricks from our previous lesson 3 lecture.


Awesome to see an example using categorical embedding w/ a tabular dataset ; )

I think your final line is my favorite in the whole starter pack. I had never thought of putting this directly into the notebook and it’s amazing:

!kaggle competitions submit -c quickdraw-doodle-recognition -f subs/{name}.csv.gz -m "{name}"

I like the idea that you are incorporating user feedback into your next training iteration. However, you do want to put manual inspection in between, because feedback is not always right :slightly_smiling_face:


Speaking of collaborative filtering, I wrote a small post while watching the previous version of the course. The code in the post is written in PyTorch, but it could be interesting for anyone who wants to dig deeper into the topic.

One of the gists from the post shows how to write a small custom nn.Module with embeddings:
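(The embedded gist may not render here. As a rough sketch of that kind of module — class and attribute names are my own, not the post’s — a dot-product model with user/item embeddings and bias terms:)

```python
import torch
import torch.nn as nn

class EmbeddingDotBias(nn.Module):
    # Classic matrix-factorisation recommender: the predicted rating is the
    # dot product of user and item embeddings plus per-user and per-item biases.
    def __init__(self, n_users: int, n_items: int, n_factors: int = 50):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, n_factors)
        self.item_emb = nn.Embedding(n_items, n_factors)
        self.user_bias = nn.Embedding(n_users, 1)
        self.item_bias = nn.Embedding(n_items, 1)

    def forward(self, users: torch.Tensor, items: torch.Tensor) -> torch.Tensor:
        dot = (self.user_emb(users) * self.item_emb(items)).sum(dim=1)
        return dot + self.user_bias(users).squeeze(1) + self.item_bias(items).squeeze(1)
```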

Update 1: Not sure why the link to Medium isn’t rendering properly; here is the plain address:

Otherwise, you can probably find it via my username, @iliazaitsev, on Medium.

Update 2: OK, Medium support responded that my account was blocked automatically by their spam filter. Maybe they need to try some deep learning methods to reduce the number of false positives :smile:


Looks like that Medium link is broken, though :frowning:


Great project! It’s good to finally get to see your work after you talked about the idea in our previous meetings.

Are you referring to this study group run by Assoc. Prof. Kan Min-Yen?

I am looking forward to your detailed blog post. Thanks.


Hm, thank you for letting me know! Not sure why, but Medium shows it as suspended :frowning:

@ttgm Would you mind sharing your notebook please?

@bholmer interesting work! How did you separate out and visualize the different areas of the painting that appear to belong to different artists?