Thread for Blogs (Just created one for ResNet)


(Jeremy Howard) #102

Wow this is just terrific, once again! You have a real knack for technical writing. I noticed a couple of little issues FYI:

The second way to go about it, and, in fact, the easiest to implement, is to approximate the derivative with the following formula we know from calculus:

I don’t see a formula here.

The fastest method for calculating would be to analytically find the derivative for each neural network architecture.

This is what backprop does, right? Otherwise, I’m not sure what distinction you’re making. Backprop simply calculates the derivative using the chain rule. I think the more important thing to mention here is that some libraries like PyTorch can calculate the analytical derivative for arbitrary Python code, so you don’t have to worry about doing it yourself.
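[Editor’s note: for context, the two approaches being contrasted — a numerical finite-difference approximation versus an analytic derivative — can be sketched in a few lines of plain Python. The function `f(x) = x ** 2` is a made-up example, not from the blog post under review:]

```python
def numerical_derivative(f, x, h=1e-5):
    # Central finite-difference approximation from calculus:
    # f'(x) ≈ (f(x + h) - f(x - h)) / (2 * h)
    return (f(x + h) - f(x - h)) / (2 * h)

def f(x):
    return x ** 2  # made-up example; the analytic derivative is 2x

approx = numerical_derivative(f, 3.0)  # numerical estimate of f'(3)
exact = 2 * 3.0                        # what an analytic method gives
```

The two values agree up to an O(h²) truncation error, which is why the numerical version is easy to implement but slower and less precise than the analytic derivative that backprop (or an autograd library) computes.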

Some of them are described in my other post ‘Improving the way we work with learning rate’.

You should link to your post here. (In general, I think your article could use a few more hyperlinks, BTW.)

Finally, I spotted at least one spelling mistake, so perhaps run a spell-checker over it?


(Sravani Aluri) #103

Hi All,
I’ve written a simple blog post on embeddings, and I’m attaching the draft here. Please review it and let me know your thoughts. I’ll publish it after making any changes.

Thanks in advance :slight_smile:


(Nikhil B ) #104

Here is a second blog post that builds on ‘fun with small datasets’: Is the human wearing glasses or not?

I used a script to download hundreds of images from Google Image search (135 x 2 training, 20 x 2 validation images). I was able to get 100% accuracy using augmentation and differential learning rates.
@jeremy I’ve tried to get this done before class, so let me know if there are some gaps to be addressed :slight_smile:


(Jeremy Howard) #105

Nice job with the ‘glasses detector’ - very nice example of using a little more data, and a little more training. Thanks so much for sharing :slight_smile:


(Ravi Sekar Vijayakumar) #106

In the same spirit, I wrote a blog on Spiderman Vs Deadpool here. Appreciate any feedback on this!
@nikhil I found that some images downloaded by the script were corrupt. I added code to remove these before creating the CSV.


#107

My past self can so relate to this:

Read the deep learning book before building anything

Thankfully Jeremy showed us the way! :smiley:


(Apil Tamang) #108

Just put up another easy-to-read blog post on entity embeddings… I actually wrote this up from the earlier version of the course. Feels good to distill the writing and finally put it out as a blog post…


(Pramod) #109

The draft of my first Medium post is here. Would love to hear your thoughts/comments! In either case, I thank each and every one of you for inspiring me to overcome my fear of technical writing.

Update: the draft has now been published, after incorporating some comments.


(James Dietle) #110

I put up my recent blog post on both Medium and LinkedIn, if anyone has any thoughts or insight.


(Vitaly Bushaev) #111

Hey guys! Thank you to everybody here! I’ve been getting incredibly encouraging feedback. I wouldn’t even have thought of doing it without this community.
Also, I wrote another post on optimization using SGD with momentum, which is what’s used by default in the fastai library (or at least was a couple of days ago). I’d appreciate any feedback.


(Miguel Perez Michaus) #112

Great post. I liked the visualizations where you can see the “inertia/lag” of momentum increasing as it gets bigger, and also the one of Nesterov’s momentum… I enjoyed all your posts, but this one is my favourite so far! :grinning:


(Anand Saha) #113

Just wow!


(Nikhil B ) #114

Awesome read. The ‘Why momentum works’ section was quite useful. Interesting to learn about the Nesterov gradient :slight_smile:


(Nikhil B ) #115

Great article! It was nice to see intuitive explanations of the code used to achieve learning rate annealing. I guess these explanations will be the most helpful for the general audience.
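[Editor’s note: as one concrete example of a learning rate annealing schedule of the kind being discussed, here is a minimal cosine-annealing sketch in plain Python. The function name and parameters are my own, not fastai’s actual API:]

```python
import math

def cosine_anneal(lr_max, lr_min, t, t_total):
    # Decay the learning rate from lr_max down to lr_min over t_total
    # steps, following half a cosine curve: gentle at both ends,
    # fastest in the middle.
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / t_total))

schedule = [cosine_anneal(0.1, 0.001, t, 100) for t in range(101)]
print(schedule[0], schedule[-1])  # starts at lr_max, ends at lr_min
```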


(Jeremy Howard) #116

These are great articles! :slight_smile: @mindtrinket it was very interesting to hear more about your journey…


(Aditya) #117

This article might serve as a bible for Kaggle competitions related to image classification… (at least I came to know about many new things; I hope the community finds it useful)

https://flyyufelix.github.io/2017/04/16/kaggle-nature-conservancy.html

Can someone share insights on CAMs (Class Activation Maps)?
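[Editor’s note: for anyone who wants a head start, a Class Activation Map is a weighted sum of the final convolutional layer’s feature maps, using the classifier weights for the class of interest. A minimal plain-Python sketch with tiny made-up dimensions, not real network outputs:]

```python
def class_activation_map(feature_maps, class_weights):
    # feature_maps: K spatial maps, each H x W (lists of lists)
    # class_weights: K weights from the final linear layer for one class
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, weight in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * fmap[i][j]
    return cam  # high values mark regions that drove the class score

# Two 2x2 feature maps, classifier weights 0.5 and 2.0
maps = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]]]
cam = class_activation_map(maps, [0.5, 2.0])
print(cam)  # → [[0.5, 2.0], [2.0, 0.5]]
```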


(Jeremy Howard) #118

We’ll do them next week :slight_smile:


(Aditya) #119

If someone wants to install PyTorch on Win10, this might help (I haven’t tried it myself).


(sergii makarevych) #120

Hey everyone. The fastai community has created tons of material here: from bash, tmux, running AWS, and the Kaggle CLI, to articles on weight decay and adaptive learning rates, and on to how to put it all together and get good prediction accuracy. I thought it might be useful for future groups, for Jeremy, and for us if we joined together and extracted these materials into a non-forum, book-type structure: “Deep learning in fastai”.

Who might be interested in joining this crusade?


(Anand Saha) #121

Me!

This forum also has precious nuggets of wisdom spread all around in the conversations. We can curate them too.

Let me know how you want to proceed with this.