Thread for Blogs (Just created one for ResNet)

(Apil Tamang) #1

Hi all…

I went ahead and created my first blog post, on ResNet. I really wanted to understand how ResNet worked, and it didn’t feel like any of the existing blogs really hit the kernel of the matter! I hope you all check it out. It’s supposed to be a very easy read.

Any ideas for improvement are welcome. I wish, for that matter, that I understood the mathematics in better detail too, but I don’t feel like I do. In particular, I wanted to understand how the skip connections actually help the gradients, and/or ultimately the classification problem. The detailed math seems to be discussed here.
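On the question of how skip connections help the gradients, here is a toy sketch of the core intuition (pure Python, not from the post or the linked discussion; `plain_grad` and `residual_grad` are made-up illustrative helpers): for a residual layer y = f(x) + x, the chain rule gives dy/dx = f'(x) + 1, so the identity path contributes a constant 1 that keeps the gradient from vanishing as depth grows.

```python
# Toy illustration: why the identity skip connection in a ResNet
# block keeps gradients alive. Each "layer" is f(x) = w * x, the
# simplest possible stand-in for a real conv layer.

def plain_grad(w, depth):
    """Gradient of the output w.r.t. x through `depth` stacked plain layers."""
    g = 1.0
    for _ in range(depth):
        g *= w          # chain rule: each layer multiplies by f'(x) = w
    return g

def residual_grad(w, depth):
    """Gradient through `depth` residual layers y = f(x) + x."""
    g = 1.0
    for _ in range(depth):
        g *= (w + 1.0)  # chain rule: f'(x) + 1 -- the '+1' is the skip path
    return g

# With small weights (w = 0.1) and 20 layers, the plain gradient
# collapses to ~1e-20, while the residual gradient stays usable.
print(plain_grad(0.1, 20))
print(residual_grad(0.1, 20))
```

The same "+1" term is why very deep residual networks remain trainable where equally deep plain networks are not.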

Thanks and enjoy

(Jeremy Howard) #2

Looking good! BTW if you mention you’re doing the course when you blog, we can share your blog as an example of cool stuff our students are doing. And it can help bring more folks to the course.

(Miguel Perez Michaus) #3


I have created a blog to post insights during the course. I’ve imaginatively named it “learning on machine learning”.

First post here

It’s not much more than an insightful curves chart, but I think it’s a cool way of using the learning rate finder. :grinning:


I really liked your post. You might be using simple techniques - as you state in your article - but you put them to good use to tell an interesting story.

One comment though - you might want to run a spellcheck on the text and also proofread it for issues that stand out. One trick to proofreading is reading the article out loud. Also, the plot really needs a legend, and then I think you can remove the code that generated it (which doesn’t contain any info on which line color is for which batch size either).

Really nice read nonetheless, thx for sharing.

(Miguel Perez Michaus) #5

@radek, thanks a lot for your feedback. I will take care of those (important!) details you mention. I’m quite new to 2.0 stuff (believe it or not, almost zero internet footprint until recently) and I’m also learning Python now (R programmer here), so there are still many things to improve. But this course, all of you, and Jeremy’s tireless example have motivated me to try to participate and contribute a bit more. :grinning:


I think you are off to a great start :slight_smile: and I think you have a very nice way with words. The only reason I commented on your post - as I also state in the PM :wink: - is that I liked it a lot and I think that with minor changes you can make it even better.

(Jeremy Howard) #7

I’d like to share this - as @radek says it needs a little light editing so I’ll wait until that’s done. I also suggest explaining that first LR vs loss curve (i.e. “learning rate finder”), since it’s super awesome and pretty much no-one knows about it. In explaining it, it would be good to show the LR vs batch curve as well, and link to the original paper (already linked in the lesson 1 notebook).
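For anyone who hasn’t seen the technique being discussed, here is a minimal sketch of the idea behind the learning rate range test from Smith’s paper - a toy stand-in with a simulated loss curve, not fastai’s actual implementation; `lr_schedule` and `suggest_lr` are hypothetical helper names:

```python
import math

# Sketch of an LR range test: raise the learning rate exponentially
# across mini-batches, record the loss at each step, then pick a rate
# roughly an order of magnitude below where the loss bottoms out.

def lr_schedule(lr_min, lr_max, n_steps):
    """Exponentially spaced learning rates from lr_min to lr_max."""
    ratio = lr_max / lr_min
    return [lr_min * ratio ** (i / (n_steps - 1)) for i in range(n_steps)]

def suggest_lr(lrs, losses):
    """Crude rule of thumb: one tenth of the LR with the lowest loss."""
    best = min(range(len(losses)), key=lambda i: losses[i])
    return lrs[best] / 10.0

lrs = lr_schedule(1e-5, 10.0, 100)

# Stand-in loss curve just for the demo (a real run would train one
# mini-batch per step): loss falls until lr ~ 0.1, then diverges.
losses = [1.0 - 0.1 * math.log10(lr / 1e-5) if lr < 0.1 else
          0.6 + 5.0 * (lr - 0.1) for lr in lrs]

print(suggest_lr(lrs, losses))  # roughly 0.01 for this toy curve
```

Plotting `losses` against `lrs` on a log-x axis gives the LR-vs-loss curve mentioned above; the suggested rate sits on the steep downward slope, safely below the divergence point.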

(Miguel Perez Michaus) #8

Great! I will add those explanations, plus the editing (I also found the learning rate finder useful and awesomely informative, which is what gave me the idea for the post). So, I will update and report. Thanks for the feedback! :smile:

(Abdelrahman Ahmed) #9

Not sure if this would be relevant to the thread, but prior to Part 1v2 starting, I was still going through the original Part 1 of the course. I decided to leverage what I’ve learned in a university assignment, and figured I would blog about it and share my final assignment report and code :slight_smile:

(Jeremy Howard) #10

In this course we’ll learn how to get 94.5% on IMDB, so hopefully you’ll be able to update your model!

(Miguel Perez Michaus) #11

@jeremy, I updated the post with the expanded info about the “learning rate finder”, referencing the paper by Leslie N. Smith and adding labels to the plots, also incorporating the kind advice of @radek as much as I’ve been able to.

I will be AFK for some hours now (it’s quite late here) but will be glad to change or modify any part if you find it imprecise or (I hope this will not be the case) just plain wrong. Anyway, your previous advice made the post more informative and solid - very happy about that!


Not sure if we are still doing this thread, but I wrote a blog post that I’d like to share here.

It is on how far I got in tackling whether a smoother surrounding error surface leads to better generalization.

If you have any thoughts or comments, I would love to hear them! :slight_smile:

(Anand Saha) #13

I wrote one about the ResNet architecture - I wanted to have my intuitions clear about what ‘Residual’ really means in ResNet. I do hope I got it right :slight_smile:

Here is the blog post. The next architecture to understand would be (no surprises) ResNeXt!

As a passing note, this course has been an amazing catalyst to make me think in multiple dimensions. And the forum feels like 7 days of awesome learning per week! :slight_smile: Thanks @jeremy!

(Ravi Sekar Vijayakumar) #15

Superb post, Saha!

(Ismaël Koné) #16

I discovered the fastai course through a Medium blog post.

(Jeremy Howard) #17

Nice! Which one?

(Ismaël Koné) #18

I was learning PyTorch and googling for tutorials about it, and found this:

where the author talked about fastai - and that’s how I found you!

(Jeremy Howard) #19

@anandsaha this is absolutely exceptional - terrific work :slight_smile:

(Jeremy Howard) #20

Interesting to see how much our performance has improved since then!

(Sanyam Bhutani) #21

Here is my First Technical Blog post. ‘CNN in 5 minutes.’

It’s an ELI5-style explanation of the topic. I originally wanted to wait until I could blog about an awesome idea, but I decided to pull the trigger and go with the FastAI philosophy of helping the learner who’d be a step behind me.

I’d definitely love to hear your feedback.