Thank you @jeremy and @yinterian, and fellow classmates, for the incredible support throughout the course. While most of us found it difficult just to attend the live lecture, make time around our full-time jobs, and absorb and work through the course content and assignments, we realised what an incredible effort Jeremy and his colleagues have put in: not only running one dl1 lecture a week, but also two ml1 lectures every week, while assisting us so patiently on the forums.
This is no doubt a monumental effort. I'm sure all of us are inspired by your work ethic and would look up to you any day!
This is exactly what I needed. I have been able to overcome the inertia (read fear) to indulge in paper reading, get experimental and connect the dots. Most importantly, there is a sense of direction.
From my own survey, fast.ai is the only genuinely applied deep learning course available out there.
This course is more than what meets the eye. The exercises and the discussions that follow are enough to keep you engaged for weeks! This was actually an immersive 7-week program (I am actually trailing and going to use the coming days/weeks to catch up).
We international fellows have been a lucky bunch to have had the privilege of attending this live! The best we can do now is get more people on board to experience it. And of course apply what we've learned to what we work on.
The journey is long and this was an excellent boost.
I echo everything said here - it's been such a great experience participating in this course. I am limited in my experience and time to be able to extract maximum benefit just yet, but I feel very well equipped to revisit everything, try to implement it, and really cement things. I am utterly grateful for @jeremy and @yinterian's efforts and collaboration, but most especially to @jeremy for making something accessible to me by the way he teaches; the insights and the competence he shows have helped me so very much to feel confident that I am participating in something awesome!
Things are within my grasp that I had previously considered beyond my understanding. I would love to come back for more next year, and am dedicated to accomplishing something good with all of this over the next few months!
Thank you, @jeremy! The course has given me so many new tools, so much knowledge, and so many experiences that I cannot even start to describe how grateful I am. Before part 2 starts, I will watch and re-watch the DL and ML courses until I know them by heart. I will create another PR. And I will make submissions to at least 2 Kaggle competitions. Thanks once again!
I don't know how to thank you enough, @jeremy, @yinterian, and everyone who kindly helped in the forums! The step-by-step implementation that started with a shallow NN and ended with ResNet, shown by @jeremy last night, was my favorite. This course was really life-changing for me. I'm kind of just standing at a starting point. I'll watch and re-watch all the DL and ML lecture videos and replicate the notebooks on my own dataset to digest them deeply. Hope to see you all again in Part2v2.
I don't quite fully grok it in the context of ReLU and all, but here's a quick summary of my naive understanding. Please feel free to clear up my misunderstanding here.
backpropagation through layers => chain rule applied to composed functions => lots of matrix multiplications (f1*f2*f3*......fn)
Given that, the output of those multiplications will "explode" (grow suddenly) in proportion to the weights in those layers/functions, even if the change in each individual layer was fairly small (but larger than 1; e.g. 1.1^32 ≈ 21.1).
Now, if you constrain the outputs of those layers/functions to remain within the (0, 1) or (-1, 1) range, they wouldn't increase exponentially (and hence not explode). Hence the use of the sigmoid (0, 1) / tanh (-1, 1) squashing functions.
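To make the compounding point concrete, here's a tiny toy sketch (my own illustration, not anything from the course notebooks): repeatedly scaling by a per-"layer" factor of 1.1 blows up, while routing the same scaled value through tanh keeps it bounded.

```python
import math

# 32 "layers", each multiplying by a modest factor of 1.1
x = 1.0
for _ in range(32):
    x *= 1.1
print(x)  # ~21.11 -- small per-layer growth compounds into a large value

# Same per-layer scale, but squashed through tanh after each step
y = 1.0
for _ in range(32):
    y = math.tanh(1.1 * y)
print(y)  # stays bounded inside (-1, 1)
```

The multiplicative chain grows geometrically, while the squashed version settles toward a fixed point well inside the (-1, 1) band.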
Thanks tons @jeremy and @yinterian for all the inspiration, education and making this resource freely available. Bracing myself for rewatching all the lectures, more experimentation and of course, for part 2.
Depending on your nationality: I applied for travel authorization via ESTA (Electronic System for Travel Authorization). It lasts for 2 years or until your passport expiry date, whichever is earlier. To apply, please visit the official website of the Department of Homeland Security. Please check carefully for any updates to the laws and regulations, and comply with the limitations of the authorization.
When I lowered bptt and bs in the imdb notebook, I was able to run through the cells, but it took hours to get through the 14 epochs of learner.fit. Otherwise, my 6 GB GPU ran out of memory. Here's what I used:
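(The poster's exact values aren't shown in this excerpt; as a hedged sketch, lowered settings in the imdb notebook might look like the following — the specific numbers here are illustrative assumptions, not the originals.)

```python
# Illustrative only -- these values are assumptions, not the poster's originals.
bptt = 50  # backprop-through-time window; shorter windows use less GPU memory
bs = 32    # batch size; smaller batches also reduce peak GPU memory use
```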