One helpful habit: if you find documentation that doesn’t explain what you need to know, or discover that something doesn’t work the way the docs say it will, create an issue in the project’s GitHub Issues that describes the problem and suggests a solution. (If you have the time and inclination, you can even send in a Pull Request that implements that solution!)
That way, you know that the next folks that come along will have a slightly easier time, thanks to your contribution.
@joedockrill I noticed that you have voila running on Heroku. I’d love to have docs on how to do that. Would you or anyone else be interested in contributing a PR for that? Here’s an example PR that provides that info for another service, FYI:
I made an image classifier for cheetahs, leopards and jaguars. Cheetahs are easily recognizable, but I find it quite hard sometimes to distinguish a leopard from a jaguar!
The model uses a ResNet-50 architecture and reaches 92% accuracy; I haven’t done any hyperparameter tuning for the moment.
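For anyone wanting to try something similar, a classifier like this can be set up in a few lines of fastai v2. This is only a sketch: the folder path, transforms, and epoch count are my assumptions, not the poster’s actual settings, and it assumes images organised in one folder per class.

```python
from fastai.vision.all import *

# Assumed layout: big_cats/cheetah/, big_cats/leopard/, big_cats/jaguar/
path = Path('big_cats')
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224), batch_tfms=aug_transforms())

# Transfer learning from an ImageNet-pretrained ResNet-50
learn = cnn_learner(dls, resnet50, metrics=accuracy)
learn.fine_tune(4)  # epoch count is a guess; no hyperparameter tuning

learn.export('big_cats.pkl')  # export for deployment, e.g. via an API
```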
I’ve deployed it on seeme.ai via the API and it was surprisingly simple.
Side note: I wonder if the model uses the background of the images to differentiate jaguars from leopards, since they do not live in the same parts of the world: the background should often be greener and more tropical for jaguars, and drier for leopards. I’ll look into that next!
Based on the visualisations of gradient descent in Lecture 3 (and Lecture 2 of version 3), I have been investigating gradient descent and producing a bunch of visuals of my own. Some of the visuals are funky and emotive and almost lifelike!
Part 1. Gradient descent with simple models on simple data
Part 2. Gradient descent with neural network models on simple data
Though (I think) I understand the theory of gradient descent and of vanilla neural networks, it is evident that the whole is greater than the sum of its parts. I underestimated how much I could learn by experimenting.
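For anyone wanting to reproduce the simplest of these experiments, here is a minimal, self-contained sketch in plain NumPy (my own toy setup, not the poster’s actual code): gradient descent fitting a line y = a·x + b to noisy data, tracking the loss at each step so it can be plotted.

```python
import numpy as np

# Toy data: y = 3x + 1 plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 1 + rng.normal(0, 0.1, size=100)

a, b = 0.0, 0.0   # parameters to learn
lr = 0.1          # learning rate
history = []      # loss per step, for plotting

for step in range(500):
    pred = a * x + b
    err = pred - y
    history.append((err ** 2).mean())   # MSE loss
    # Analytic gradients of the MSE with respect to a and b
    grad_a = 2 * (err * x).mean()
    grad_b = 2 * err.mean()
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b, history[-1])
```

Plotting `history` (or the path of `(a, b)` over a contour plot of the loss) gives exactly the kind of visual the lectures show.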
Somebody asked a couple of weeks ago if I could turn my DuckDuckGo image scraper notebook into an installable library, and I’ve been meaning to play with nbdev anyway (which is awesome, btw; thanks to Jeremy and Sylvain), so that’s what I did.
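For anyone curious, the core nbdev (v1) workflow for turning notebooks into an installable library looks roughly like this, once you’ve scaffolded a repo from the nbdev template and tagged cells with `#export`:

```shell
nbdev_build_lib    # export tagged notebook cells into a Python package
nbdev_build_docs   # generate documentation pages from the notebooks
nbdev_test_nbs     # run the notebooks as tests
pip install -e .   # install the generated library locally
```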
I made an app to classify images of sea fish. For the project I trained a ResNet-34 on a database of 44,134 images classified into 44 fish categories (for the moment; as I manage to grow my database I will add more fish species).
The project was deployed with MyBinder. The difference from the app shown in session 3 is that instead of using a Jupyter notebook and Voilà, I used Streamlit. The link to the project can be found here and the GitHub repo can be found here.
I just published my project on Towards Data Science. I used fastai v2, experimented with the Chest X-Ray dataset on Kaggle, and got 100 percent accuracy on the provided validation set (the val set for this dataset is rather small, but it’s what others used on Kaggle), which is the best result among the other notebooks on Kaggle for this dataset. Click here to see the post.
I would be really happy to hear your comments about it.
Wow, it took a lot longer to deploy this than to train it.
First, I tried to deploy on Binder, but it does not seem to work. I checked some of the deployments that should work, but none of them do anymore. It seems broken.
However, behold the “What game are you playing” classifier:
A little classifier that can detect what game you are playing from a screenshot.
On Heroku, the slug size is 935MB with the requirements from the course page and only the bear classifier example, with a .pkl file of 46MB.
Hi ringoo, hope you’re having a fun day!
Well done for persevering. Like you, I think the most difficult part of creating any classifier model is deploying it online easily, at little or no cost, especially if one is still waiting to make their millions from it (pity the link makes you log in to see the model).
@joedockrill Thank you for your comment. Yes, I used the instructions. The pytorch version in the Procfile is definitely the cpu version. I just copied it from the instructions.
It seems that I pointed Heroku at the wrong repository, and it used the MyBinder requirements. Now that I have repeated all the steps, it seems to work.
I have created a separate repository for the Heroku deployment: https://github.com/mesw/whatgame3.git, if you want to have a look at the files. Now the slug is only 368MB. Nice!
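For reference, the usual way to keep the Heroku slug small is to pin the CPU-only PyTorch wheels in requirements.txt, since the default GPU builds are far larger. This is an illustrative sketch with assumed version numbers, not the exact file from the repo:

```
# requirements.txt (illustrative versions)
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
torchvision==0.7.0+cpu
fastai>=2.0.0
voila
ipywidgets
```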
If you would like to take this little experiment for a spin, you can find it here:
Too bad: there is a Voilà error (the same as with MyBinder) and it does not work.
Guess the Doodle App
I’ve created this app based on the knowledge from Lessons 1 & 2, completely in Jupyter notebooks, giving the app some Material UI touches.
Got inspired by Google Draw and built one of my own using fastai v2.
Currently supports 5 doodles (bird, cup, dog, face, fish).
Please do give it a try.
Hi everyone, I released my article on end-to-end image classification last week.
I have tried to cover the entire spectrum, from gathering data and cleaning it, to training a model and creating an app, using the example of a guitar classifier that discriminates between acoustic, classical, and electric guitars.
Let me know what you think about it!