I’d like to share this - as @radek says it needs a little light editing so I’ll wait until that’s done. I also suggest explaining that first LR vs loss curve (i.e. “learning rate finder”), since it’s super awesome and pretty much no-one knows about it. In explaining it, it would be good to show the LR vs batch curve as well, and link to the original paper (already linked in the lesson 1 notebook).
Great! I will add those explanations, plus the editing (I also found the learning rate finder useful and awesomely informative; it's what gave me the idea for the post). So, I will update and report back. Thanks for the feedback!
Not sure if this would be relevant to the thread, but prior to Part 1 v2 starting, I was still going through the original Part 1 of the course. I decided to leverage what I'd learned in a university assignment, and figured I would blog about it and share my final assignment report and code.
In this course we’ll learn how to get 94.5% on IMDB, so hopefully you’ll be able to update your model!
@jeremy, I updated the post with the expanded info about the “learning rate finder”, referencing the paper by Leslie N. Smith and adding labels to the plots, also incorporating @radek's kind advice as much as I could.
I will be AFK for some hours now (it's quite late here), but I'll be glad to change or modify any part if you find it imprecise or (I hope this won't be the case) just plain wrong. Anyway, your previous advice made the post more informative and solid; I'm very happy about that!
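For anyone following along who hasn't seen the technique, the idea behind the learning rate finder (the "LR range test" from Leslie N. Smith's paper) is simple enough to sketch in a few lines of PyTorch. Everything below is hypothetical, a minimal illustration of the technique rather than fastai's actual implementation: train for a handful of mini-batches while growing the learning rate exponentially, record the loss at each step, then plot loss vs. LR (log scale) and pick a rate just before the loss starts to blow up.

```python
import torch
import torch.nn as nn

def lr_range_test(model, loss_fn, data, lr_start=1e-5, lr_end=1.0, steps=100):
    """Sketch of an LR range test: grow the LR exponentially from
    lr_start to lr_end over `steps` mini-batches, recording the loss.
    `data` is a list of (x, y) batch tuples (hypothetical format)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr_start)
    mult = (lr_end / lr_start) ** (1 / (steps - 1))  # per-step growth factor
    lrs, losses = [], []
    lr = lr_start
    for i in range(steps):
        x, y = data[i % len(data)]
        for group in opt.param_groups:   # set the current learning rate
            group['lr'] = lr
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        lrs.append(lr)
        losses.append(loss.item())
        lr *= mult                       # exponential LR schedule
    return lrs, losses  # plot losses vs. lrs on a log-x axis to choose an LR
```

The curve typically falls, flattens, then shoots up; the usual advice is to pick a learning rate roughly an order of magnitude below the minimum of the curve.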
It is about how far I got in tackling whether a smoother surrounding error surface leads to better generalization.
If you have any thoughts or comments, I would love to hear them!
I wrote one about the ResNet architecture. I wanted to get my intuitions clear about what “Residual” really means in ResNet; I do hope I got it right.
Here is the blog post. The next architecture to understand would be (no surprises) ResNeXt!
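For anyone skimming the thread, the core intuition (the block learns a residual F(x) on top of an identity shortcut, and outputs F(x) + x) can be sketched in a few lines of PyTorch. This is a hypothetical minimal block for illustration, not the exact blocks from the ResNet paper:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: two conv layers learn the residual F(x);
    the input x is added back via the identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        # F(x): the "residual" the conv stack has to learn
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        # shortcut connection: add the input back before the final ReLU
        return self.relu(residual + x)
```

The nice consequence: if the weights are near zero, F(x) is near zero and the block is close to an identity mapping, which is what makes very deep stacks of these blocks trainable.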
As a passing note, this course has been an amazing catalyst to make me think in multiple dimensions. And the forum feels like 7 days of awesome learning per week! Thanks @jeremy!
Superb post, Saha!
This is how I discovered the fastai course: through a Medium blog post.
Nice! Which one?
I was learning PyTorch and googled for tutorials about it, and found:
where the author talked about fastai, and that's how I found you!
@anandsaha this is absolutely exceptional - terrific work
Interesting to see how much our performance has improved since then!
Here is my first technical blog post: “CNN in 5 minutes.”
It’s an ELI5-like explanation of the topic. I originally wanted to wait until I could blog about an awesome idea, but I decided to pull the trigger and follow the fast.ai philosophy of helping the learner who’s a step behind me.
I’d definitely love to hear your feedback.
Good job! You should credit the source for those images you included, BTW.
Another blog is up:
@all, I would love your feedback.
@jeremy Do you think we should create a new topic, so that we can post our blogs as new topics? I feel like this thread will otherwise never end, or at least get very, very long…
@all Also, I created a “Fastai” tag on Medium, so I think that could help us organize our blogs into one category.
Hey, I really like that, @apil.tamang! One minor issue: we are at USF, not UCSF.
I’m not sure about one thread or many - personally I don’t mind long threads, but others I’m sure have their own opinions.
BTW, what’s your Twitter handle? I’d like to share this.
Just published a how-to on contributing to the fast.ai framework:
BTW, I couldn’t find the “fastai” tag, but if anyone is interested in retweeting this, my Twitter handle is @0hm30w.
I haven’t really used Twitter aside from having created an account many years back. Reactivated my account and followed you on Twitter (how does that even work…! lol). Handle is @apiltamang.
Feel free to tweet the link; that would be my honor! I’ll make the USF change, and add in a line or two.
It’s the best DL and ML community around - well worth spending time on…