OK I’ve created a GitHub wiki: https://github.com/fastai/docs/wiki . You can clone or fork it from https://github.com/fastai/docs.wiki.git and contribute a PR in the usual way.
This is an experiment. I don’t know if this will end up better or worse than the existing mediawiki approach. I’m hoping that the benefits of GH might be:
- We can easily add contributors, but if any turn out to cause problems, we can easily undo their commits
- We can edit on our own computers in our preferred editors for longer work, or edit directly in GH for quick changes
- We can use the markdown files in the wiki to create official docs pages, by simply using a static site generator
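To illustrate that last point, here's a toy sketch of the markdown-to-docs idea in plain Python (a real setup would use Jekyll, MkDocs, or another static site generator; the function and file names here are my own invention, purely illustrative):

```python
from pathlib import Path

def build_site(src_dir, out_dir):
    """Toy 'static site generator': wrap each markdown file from the wiki
    checkout in a minimal HTML page. Only illustrates the idea -- no real
    markdown rendering is done here."""
    src, out = Path(src_dir), Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for md in src.glob("*.md"):
        text = md.read_text()
        # Use the first '# ' heading as the page title, if there is one.
        title = next((line[2:].strip() for line in text.splitlines()
                      if line.startswith("# ")), md.stem)
        html = (f"<html><head><title>{title}</title></head>"
                f"<body><pre>{text}</pre></body></html>")
        (out / f"{md.stem}.html").write_text(html)
    return sorted(p.name for p in out.glob("*.html"))
```

So a clone of the wiki repo could be turned into a set of docs pages with one function call.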
I’m not sure of the best way to add contributors. One approach is simply to see who provides regular, useful pull requests, and give those people direct access. I’ll seed it for now with a few contributors I’ll select from here: if you’ve been a somewhat regular forum contributor, or have written a post that I’ve featured in class, and are willing to contribute to this documentation project, please PM me your GH username. If I add you as a contributor, you’ll get an invite email from GH letting you know (and if you don’t get that invite, please don’t be offended - I’ll start small and we’ll add more contributors over time as more PRs come in).
We can then think about how to structure and project manage this…
I just wrote a blog post about how GPUs help with deep learning. The post also includes parts of Jeremy’s Lesson 3 lecture. Could you please go through it and review it?
My goal was to show people that deep learning is easier than they might expect. I believe I achieved that goal: readers now know about ImageNet, pre-trained models, and fine-tuning the FC layers or the last few convolutional layers.
Another goal was to explain the basic components of a CNN: filters, stride, and padding; gradient descent; model architectures; and what’s going on in the model after Layer 1. I think it’s just too hard to understand how all these elements combine from a 45-minute talk. I’m sure most people have heard of these elements, but only a few of them, who studied something similar at university, had the insight: “Oh, this is how it works!”
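If it helps anyone checking their understanding of filters, stride, and padding: the spatial output size of a conv layer follows a one-line formula. This is my own toy illustration, not something from the post:

```python
def conv_output_size(n, f, s=1, p=0):
    """Spatial output size of a convolution: n x n input, f x f filter,
    stride s, padding p (floor division, the usual convention)."""
    return (n + 2 * p - f) // s + 1

# 'Same' padding: a 3x3 filter with stride 1 and padding 1 keeps 224 -> 224.
conv_output_size(224, f=3, s=1, p=1)  # -> 224
```

With stride 2 and no padding the map shrinks fast, e.g. `conv_output_size(7, f=3, s=2)` gives 3.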
Not a blog post per se (though one is coming soon!), but this Twitter thread on debugging in Jupyter Notebook seems to have garnered some interest. Posting it here in case it’s useful to someone who hasn’t started using Twitter yet.
@groverpr and I have written a series of blog posts on collaborative filtering, embeddings, and different algorithms for implementing collaborative filtering. Most of the content is inspired by Lessons 5 and 6, but there are some new ideas too. We thought it would be better to get it reviewed here before publishing. We’d really appreciate it if you could have a look and provide feedback. Thanks!
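For readers new to the topic: the heart of the embedding approach from Lessons 5 and 6 is just a dot product of a user vector and an item vector, plus bias terms. A minimal sketch (the names and numbers are mine, not from the blog series):

```python
def predict_rating(user_vec, item_vec, user_bias=0.0, item_bias=0.0):
    """Predicted rating = dot(user embedding, item embedding) + biases."""
    return sum(u * i for u, i in zip(user_vec, item_vec)) + user_bias + item_bias

# Tiny example: 3-dimensional embeddings for one user and one movie.
user = [0.5, 1.0, -0.2]
movie = [1.0, 0.8, 0.1]
predict_rating(user, movie, user_bias=0.1, item_bias=0.2)  # ~ 1.58
```

In training, the embeddings and biases are the learned parameters, fitted by gradient descent against known ratings.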
I stumbled upon AutoML by Google just yesterday and found it fascinating. Here is the link in case someone else also missed it. The project was released a few months ago.
I just published a Reinforcement Learning comic describing the intuition behind Advantage Actor Critic (A2C) models. I wanted to apply our newfound PyTorch knowledge to an RL problem and this is the result…
It’s a little different from other tutorials out there. I’d love your feedback on how to make it better. Please chime in!
Here is my attempt at “A sweet introduction to RL”. I will try to make a series explaining all the major RL algorithms using the same bakery example. (Please don’t compare it with the comic, you won’t like it then xD)
Here’s my first attempt at writing a blog post. I’ve summarized how to use augmentation transforms (as we saw in Lesson 1) to improve the model’s performance.
Any feedback/suggestions before I publish would be appreciated!
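For context, the simplest augmentation transform is a horizontal flip. Here’s a toy illustration in plain Python (the blog presumably uses fastai’s built-in transforms; this just shows the idea):

```python
def horizontal_flip(image):
    """Flip an image (a nested list of pixel rows) left-to-right.
    Applied randomly at training time, this increases the effective
    variety of the training data without changing the labels."""
    return [list(reversed(row)) for row in image]

img = [[1, 2, 3],
       [4, 5, 6]]
horizontal_flip(img)  # -> [[3, 2, 1], [6, 5, 4]]
```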
It’s @negamuhia. I already had it there, but for some reason I only got credited when one person tweeted it. I don’t think anyone’s tweeted it through Medium though.