Please check out my work on Optimizing Image Classification using Transfer Learning! It's an image classifier for 4 different types of Arctic dogs. This Medium blog walks you through, step by step, how I finally brought the error rate down at the end using a few tips and tricks from the lesson 3 lecture.
Speaking of collaborative filtering, I wrote a small post while I was watching the previous version of the course. The code in the post is written in PyTorch, but it may be interesting for anyone who wants to dig deeper into the topic.
Here is one of the gists from the post, showing how to write a small custom nn.Module with embeddings:
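Since the gist itself isn't embedded here, the following is only a sketch of what such a module typically looks like (a dot-product model with user/item embeddings and bias terms); all names and sizes are my own illustration, not the post's actual code:

```python
import torch
import torch.nn as nn

class DotProductBias(nn.Module):
    """Collaborative-filtering model: user and item embeddings plus biases.

    Predicted rating = dot(user_factors, item_factors) + user_bias + item_bias.
    """
    def __init__(self, n_users, n_items, n_factors=50):
        super().__init__()
        self.user_factors = nn.Embedding(n_users, n_factors)
        self.item_factors = nn.Embedding(n_items, n_factors)
        self.user_bias = nn.Embedding(n_users, 1)
        self.item_bias = nn.Embedding(n_items, 1)

    def forward(self, users, items):
        # Dot product of the latent factors for each (user, item) pair.
        dot = (self.user_factors(users) * self.item_factors(items)).sum(dim=1)
        # Add the per-user and per-item bias terms.
        return dot + self.user_bias(users).squeeze(1) + self.item_bias(items).squeeze(1)

model = DotProductBias(n_users=100, n_items=200)
preds = model(torch.tensor([0, 1]), torch.tensor([5, 7]))  # shape: (2,)
```

The model is then trained with an ordinary loss (e.g. MSE against known ratings) and an optimizer such as Adam, exactly like any other nn.Module.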
Update 1: Not sure why the link to Medium isn't rendering properly; here is the plain address:
The dataset was created using some software, or perhaps a camera/device, that outputs these key points. Whatever it was, there must be an underlying mathematical model (a function) for that camera/device/software. So that function is what the neural network is trying to approximate, rather than finding the actual key points. What I mean is that the neural network is not trying to find exactly where the mouth, eyes, or nose are, because we haven't explicitly encoded that in our dataset. Yoshua Bengio and his team created this dataset; I would like to know whether they had any intention of that sort.
If that's the case, then even if we predict the actual facial keypoints for the test set, we can expect a higher error.
I've made a lynx classifier (given a lynx, it tells you which lynx species it is). https://which-lynx-is-it.now.sh/
The error rate is somewhere around 20% (I lost the notebook because of an issue with GCP). Considering that the dataset was noisy, I think that's good. Interestingly, it really has trouble classifying baby lynxes for some reason.
Hopefully I will have time this week to write a blog post about it.