Thread for Blogs (Just created one for ResNet)

OK I’ve created a GitHub wiki: https://github.com/fastai/docs/wiki . You can clone or fork it from https://github.com/fastai/docs.wiki.git and contribute a PR in the usual way.

This is an experiment. I don’t know if this will end up better or worse than the existing mediawiki approach. I’m hoping that the benefits of GH might be:

  • We can easily add contributors, but if any turn out to cause problems, we can easily undo their commits
  • We can edit on our own computers in our preferred editors for longer work, or edit directly in GH for quick changes
  • We can use the markdown files in the wiki to create official docs pages, by simply using a static site generator

I’m not sure of the best way to add contributors - one approach is simply to see who is providing regular useful pull requests and give those people direct access. I’ll seed it for now with a few contributors I’ll select from here: if you’ve been a somewhat regular forum contributor, or have written a post that I’ve featured in class, and are willing to contribute to this documentation project, please PM me your GH username. If I add you as a contributor, you’ll get an invite email from GH letting you know (and if you don’t get that invite, please don’t be offended - I’ll start small and we’ll add more contributors over time as more PRs come in :slight_smile: )

We can then think about how to structure and project manage this…

6 Likes

Hi all,

I just wrote a blog post about how GPUs help with deep learning. The post also includes parts of Jeremy’s Lesson 3 lecture. Could you please go through it and review it?

Thanks :slight_smile:

2 Likes

Today I gave a 45-minute presentation on CNNs at my company. Some slides: image recognition.pdf (2.7 MB)

8 Likes

Cool! How did it go?

My goal was to show people that deep learning is easier than they might expect. I believe I achieved that goal, and everyone now knows about ImageNet, pre-trained models, and fine-tuning the FC layers or the last few convolutional layers.

Another goal was to explain the basic components of a CNN: filters, stride, padding, gradient descent, model architectures, and what’s going on in the model after layer 1. I think it’s just too hard to understand how all these elements fit together from a 45-minute talk. I’m sure everyone had heard of the individual pieces, but only the few who had studied something similar at university had that insight: “Oh, so this is how it works!”
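For anyone curious what “fine-tuning just the last layers” looks like in code, here’s a minimal PyTorch sketch. The model is a hypothetical tiny stand-in (not anything from the slides, and not a real pretrained network) - it just shows the freeze-then-replace-the-head pattern:

```python
import torch.nn as nn

# A hypothetical tiny CNN standing in for a real pretrained ImageNet model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1000),                # original 1000-class ImageNet head
)

# Freeze every "pretrained" weight...
for p in model.parameters():
    p.requires_grad = False

# ...then swap in a fresh head for a 2-class problem. The new layer's
# parameters default to requires_grad=True, so only the head gets trained.
model[4] = nn.Linear(8, 2)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

Unfreezing some of the last convolutional layers is the same idea: flip `requires_grad` back to `True` on just those parameters.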

But I think it’s OK for a first try.

10 Likes

Not a blog post per se (though one is coming soon! :slight_smile: ), but this Twitter thread on debugging in Jupyter notebooks seems to have garnered some interest. Posting it here in case it’s useful to someone who hasn’t started using Twitter yet.
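I can’t reproduce the whole thread here, but one staple of notebook debugging is post-mortem inspection: after a cell raises an exception, run the `%debug` magic in the next cell and you’re dropped into the frame that failed. Outside IPython, `pdb.post_mortem()` on the traceback does the same thing. A toy illustration:

```python
import sys

def buggy(x):
    return 1 / x          # raises ZeroDivisionError when x == 0

try:
    buggy(0)
except ZeroDivisionError:
    exc_type, exc_value, tb = sys.exc_info()
    # In a notebook, running the %debug magic in the next cell drops you
    # into this failing frame; outside IPython, pdb.post_mortem(tb) does
    # the same thing.
    failing_func = tb.tb_next.tb_frame.f_code.co_name   # the frame that raised
```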

8 Likes

The holiday season gave me time to post my review of Kaggle’s Porto Seguro Safe Driver competition.

LinkedIn:
www.linkedin.com/pulse/predicting-auto-insurance-claims-deep-learning-james-dietle

Medium:

Website:

4 Likes

Hi guys,

@groverpr and I have written a series of blog posts on collaborative filtering, embeddings, and different algorithms for implementing collaborative filtering. Most of the content is inspired by Lectures 5 and 6, but there are some new ideas too. We thought it would be better to get it reviewed here before publishing. We’d really appreciate it if you could have a look and provide feedback. Thanks :slight_smile:
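As a taste of the core idea, here’s a tiny NumPy sketch of dot-product collaborative filtering - toy sizes and made-up numbers, not code from the posts themselves. Each user and item gets a latent vector, the predicted rating is their dot product, and SGD nudges both vectors to reduce the error:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 5, 4, 3

# Every user and item gets a learned latent vector (an "embedding").
user_emb = rng.normal(scale=0.1, size=(n_users, n_factors))
item_emb = rng.normal(scale=0.1, size=(n_items, n_factors))

def predict(u, i):
    # Predicted affinity is the dot product of the two embeddings.
    return float(user_emb[u] @ item_emb[i])

# One SGD step on a single observed rating:
u, i, rating, lr = 0, 2, 4.0, 0.1
err = predict(u, i) - rating
grad_u = err * item_emb[i]            # gradient of squared error w.r.t. user vec
grad_i = err * user_emb[u]
user_emb[u] -= lr * grad_u
item_emb[i] -= lr * grad_i
err_after = predict(u, i) - rating    # error shrinks in magnitude
```

Looping that update over all observed (user, item, rating) triples is essentially matrix factorization; the lecture versions add biases and do it in PyTorch.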



10 Likes

I stumbled upon AutoML by Google just yesterday and found it fascinating. Here is the link in case someone else also missed it. This project was released a few months back.

1 Like

I’ve published a blog post outlining a few of the lessons learned from this class.

5 Likes

Hi Everyone,

I just published a Reinforcement Learning comic describing the intuition behind Advantage Actor Critic (A2C) models. I wanted to apply our newfound PyTorch knowledge to an RL problem and this is the result…
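For anyone who wants the comic’s intuition in code form, here are the quantities a one-step A2C update computes for a single transition - toy numbers of my own, not anything from the comic itself:

```python
import numpy as np

# Toy numbers for a single (state, action, reward, next_state) transition.
gamma = 0.99
reward, v_s, v_next = 1.0, 0.5, 0.6   # reward and the critic's value estimates
log_prob = np.log(0.25)               # log-probability of the action taken

# The "advantage": how much better the action turned out than the critic
# expected from this state.
advantage = reward + gamma * v_next - v_s

actor_loss = -log_prob * advantage    # reinforce actions with positive advantage
critic_loss = advantage ** 2          # regress V(s) toward the bootstrapped target
```

A real implementation backpropagates both losses through the actor and critic networks; the key intuition is that the critic’s baseline turns raw rewards into advantages.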

It’s a little bit different from other tutorials out there. I’d love to get your feedback on how to make it better. Please chime in!

Lovely artwork by @katherine

Happy New Year!

8 Likes

Wow that’s an amazingly great comic! :smiley:

What’s your twitter handle? (BTW you should add it to your medium profile so you are automatically credited when people tweet your article).

a marvel! we need more tutorials/posts like this!

1 Like

@rudy The comic is really cool! Please give the community more such cool posts :smiley:

1 Like

Here is my attempt at a sweet introduction to RL. I’ll try to make it a series explaining all the major RL algorithms using the same bakery example. (Please don’t compare it with the comic, you won’t like mine then xD)

1 Like

Here’s my first attempt at writing a blog post. I’ve summarized how to use augmentation transforms (as we saw in Lesson 1) to improve a model’s performance.


Any feedback/suggestions before I publish would be helpful!
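As a rough illustration of what augmentation does (a hypothetical NumPy stand-in, not the actual fastai transforms, which also vary lighting, rotation, zoom, etc.): each time an image is drawn for training it gets a small random perturbation, so the model never sees exactly the same pixels twice.

```python
import numpy as np

rng = np.random.default_rng(7)

def augment(img):
    # A minimal sketch: random horizontal flip plus a small horizontal shift.
    if rng.random() < 0.5:
        img = img[:, ::-1]              # horizontal flip
    dx = int(rng.integers(-2, 3))       # shift by up to 2 pixels
    return np.roll(img, dx, axis=1)

img = np.arange(64, dtype=float).reshape(8, 8)
batch = [augment(img) for _ in range(4)]  # each epoch sees a slightly different view
```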

1 Like

It’s @negamuhia. I already had it there, but for some reason I only got credited when one person tweeted it. I don’t think anyone’s tweeted it through Medium though.

Thanks @jeremy, appreciate the support for the RL comic! Especially since I know you’re not overly swept away with RL-mania like some of us :slight_smile:

My Twitter handle is rgilman33; I’ve updated it on Medium as well. Thanks for the advice (if it was meant for me)!

Added a shout-out to fast.ai on the comic, not sure what fast.ai url to link to. Please let me know if you have a preference.

2 Likes

thanks @helena and @init_27, appreciate the support!

2 Likes