Thread for Blogs (Just created one for ResNet)

(Jeremy Howard) #142

OK, I’ve created a GitHub wiki. You can clone or fork it and contribute a PR in the usual way.

This is an experiment. I don’t know if this will end up better or worse than the existing mediawiki approach. I’m hoping that the benefits of GH might be:

  • We can easily add contributors, but if any turn out to cause problems, we can easily undo their commits
  • We can edit on our own computers in our preferred editors for longer work, or edit directly in GH for quick changes
  • We can use the markdown files in the wiki to create official docs pages, by simply using a static site generator

I’m not sure of the best way to add contributors - one approach is simply to see who is providing regular, useful pull requests and give those people direct access. I’ll seed it for now with a few contributors selected from here, so if you’ve been a somewhat regular forum contributor or have written a post that I’ve featured in class, and are willing to contribute to this documentation project, please PM me your GH username. If I add you as a contributor, you’ll get an invite email from GH letting you know. (And if you don’t get that invite, please don’t be offended - I’ll start small and we’ll add more contributors over time as more PRs come in :slight_smile: )

We can then think about how to structure and project manage this…

(Shubham Singh Tomar) #143

Hi fellows,

I just wrote a blog post about how GPUs help with deep learning. The post also includes parts of Jeremy’s lesson 3 lecture. Could you guys please go through it and review?

Thanks :slight_smile:

(sergii makarevych) #144

I gave a 45-minute presentation on CNNs at my company today. Some slides: image recognition.pdf (2.7 MB)

(Jeremy Howard) #145

Cool! How did it go?

(sergii makarevych) #146

My goal was to tell people that deep learning is easier than they might expect. I believe I achieved that goal - everyone now knows about ImageNet, pre-trained models, and fine-tuning the FC layers or the last few convolutional layers.

Another goal was to explain the basic components of a CNN: filters, stride, padding; gradient descent, model architectures, and what’s going on in the model after layer 1. I think it’s just too hard to understand how all these elements fit together from a 45-minute talk. I’m sure people have heard of the elements, but only a few of them, who had studied something similar at university, had that insight: “Oh, this is how it works!”

But I think it’s OK for a first try.


Not a blog post per se (though one is coming soon! :slight_smile: ), but this Twitter thread on debugging in Jupyter notebooks seems to have garnered some interest. Posting it here in case it’s useful to anyone who hasn’t started using Twitter yet.
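For anyone curious what notebook debugging buys you: after a cell raises, IPython’s `%debug` magic drops you into an interactive pdb session on the stored traceback, where you can inspect the locals at the point of failure. Here’s a minimal plain-Python sketch of the same mechanism (the `buggy` function and its values are made up for illustration - this isn’t from the linked thread):

```python
import sys

def buggy(xs):
    total = 0
    for x in xs:
        total += x / (x - 2)   # raises ZeroDivisionError when x == 2
    return total

try:
    buggy([1, 2, 3])
except ZeroDivisionError:
    # In a notebook you would now run %debug, which opens pdb on this
    # traceback. The same frame data is available via sys.exc_info():
    tb = sys.exc_info()[2]
    while tb.tb_next is not None:   # walk down to the innermost frame
        tb = tb.tb_next
    frame_locals = tb.tb_frame.f_locals
    print(frame_locals["x"])        # the value that triggered the error
```

The point is that the offending frame’s variables are still there after the crash - `%debug` just gives you an interactive prompt over exactly this data.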

(James Dietle) #148

The holiday season allowed me to publish my review of Kaggle’s Porto Seguro Safe Driver competition.

(Shikhar Gupta) #149

Hi guys,

@groverpr and I have written a series of blog posts on collaborative filtering, embeddings, and different algorithms for implementing collaborative filtering. Most of the content is inspired by Lectures 5 and 6, but there are some new ideas too. We thought it would be better to get it reviewed here before publishing. We’d really appreciate it if you could have a look and provide feedback. Thanks :slight_smile:
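For readers who haven’t seen the lectures, the core idea behind embedding-based collaborative filtering fits in a few lines: learn a small vector per user and per item so that their dot product approximates the observed ratings. This is a toy NumPy sketch with made-up ratings, not the code from the blog series or the fastai library:

```python
import numpy as np

# Toy ratings matrix: 4 users x 3 items, 0 = unrated (hypothetical data).
R = np.array([
    [5, 3, 0],
    [4, 0, 1],
    [0, 2, 5],
    [1, 0, 4],
], dtype=float)

rng = np.random.default_rng(0)
k = 2                                    # embedding size
U = rng.normal(scale=0.1, size=(4, k))   # user embeddings
V = rng.normal(scale=0.1, size=(3, k))   # item embeddings
lr = 0.05

for _ in range(2000):
    for i, j in zip(*R.nonzero()):       # train only on observed ratings
        err = R[i, j] - U[i] @ V[j]      # prediction error
        U[i] += lr * err * V[j]          # SGD step on squared error
        V[j] += lr * err * U[i]

pred = U @ V.T                           # predicted ratings, incl. the blanks
```

The unobserved entries of `pred` are the model’s rating guesses - that’s the recommendation. Real implementations add biases, regularization, and minibatching, but the embedding dot product is the heart of it.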

(Prince Grover) #150

I stumbled upon AutoML by Google just yesterday and found it fascinating. Here is the link in case someone else also missed it. This project was released a few months back.

(Brian Muhia) #151

I’ve published a blog post outlining a few of the lessons learned from this class.

(Rudy Gilman) #152

Hi Everyone,

I just published a Reinforcement Learning comic describing the intuition behind Advantage Actor Critic (A2C) models. I wanted to apply our newfound PyTorch knowledge to an RL problem and this is the result…

It’s a little different from other tutorials out there. I’d love to get your feedback on how to make it better. Please chime in!

Lovely artwork by @katherine

Happy New Year!

(Jeremy Howard) #153

Wow that’s an amazingly great comic! :smiley:

(Jeremy Howard) #154

What’s your twitter handle? (BTW you should add it to your medium profile so you are automatically credited when people tweet your article).

(helena s) #155

a marvel! we need more tutorials/posts like this!

(Sanyam Bhutani) #156

@rudy The comic is really cool! Please give the community more such cool posts :smiley:

(Sanyam Bhutani) #157

Here is my attempt at a sweet introduction to RL. I’ll try to make it a series explaining all the major RL algorithms using the same bakery example. (Please don’t compare it with the comic - you won’t like mine then xD)

(Neerja Doshi) #158

Here’s my first attempt at writing a blog post. I’ve summarized how to use augmentation transforms (as we saw in Lesson 1) to improve a model’s performance.

Any feedback/suggestions before I publish it will be helpful!
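As a rough illustration of what augmentation transforms do (a library-agnostic NumPy toy, not the fastai transforms from the lesson): each epoch the model sees a randomly flipped or shifted copy of the image, which acts as free extra training data without changing the labels.

```python
import numpy as np

def augment(img, rng):
    """Randomly flip and slightly shift an image array of shape (H, W, C)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]            # horizontal flip
    shift = int(rng.integers(-2, 3))  # small horizontal translation
    img = np.roll(img, shift, axis=1)
    return img

rng = np.random.default_rng(42)
img = np.arange(4 * 4 * 3).reshape(4, 4, 3)           # dummy 4x4 RGB image
batch = np.stack([augment(img, rng) for _ in range(8)])  # 8 augmented views
```

Each of the 8 views contains exactly the same pixel values rearranged, so the label stays valid - that’s why side-on transforms (flips, small shifts) help with natural photos but, as discussed in the lesson, top-down imagery needs a different transform set.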

(Brian Muhia) #159

It’s @negamuhia. I already had it there, but for some reason I only got credited when one person tweeted it. I don’t think anyone’s tweeted it through Medium though.

(Rudy Gilman) #160

Thanks @jeremy, appreciate the support for the RL comic! Especially since I know you’re not overly swept away with RL-mania like some of us :slight_smile:

Twitter handle is rgilman33. I’ve updated it on Medium as well - thanks for the advice! (if it was meant for me)

Added a shout-out on the comic; not sure what URL to link to. Please let me know if you have a preference.

(Rudy Gilman) #161

thanks @helena and @init_27, appreciate the support!