Lesson 7 In-Class Discussion

Thank you @jeremy, @yinterian, and fellow classmates for the incredible support throughout the course. While most of us found it hard just to attend the live lecture, carve time out of our full-time jobs, and absorb and work through the course content and assignments, we realised what an incredible effort Jeremy and his colleagues have put in: not only running one dl1 lecture a week but also two ml1 lectures every week, and assisting us so patiently on the forums.

This is no doubt a monumental effort. I'm sure all of us are inspired by your work ethic and would look up to you any day! :slight_smile:

3 Likes

These are the initial activations, not the weights. Go back to the first non-loop version to see why this is so.

See my use of torch.multinomial in the final test cell in class today (I didn't discuss it in class, but you'll see it in the notebook).
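For anyone who hasn't opened the notebook yet, here's a minimal sketch of the idea (my own toy example, not the notebook's exact cell; the logits tensor is a made-up stand-in for the model's output):

import torch
import torch.nn.functional as F

# Made-up logits over a 5-character vocabulary, standing in for the model's output
logits = torch.tensor([2.0, 0.5, -1.0, 0.1, 1.2])

probs = F.softmax(logits, dim=0)          # turn logits into a probability distribution
idx = torch.multinomial(probs, 1).item()  # sample one index in proportion to its probability
print(idx)

Sampling with torch.multinomial instead of taking the argmax adds variety to the generated text, since lower-probability characters still get picked occasionally.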

3 Likes

I use Twitter. I follow people who have been useful in the past. Look at my Twitter likes history to get an instant start!

3 Likes

Thanks for everything! My biggest problem now is having too many projects to work on! Looking forward to part 2!

@jeremy @yinterian Thank you so much for putting up this course.

This is exactly what I needed. I have been able to overcome the inertia (read: fear) and indulge in paper reading, get experimental, and connect the dots. Most importantly, there is a sense of direction.

As far as I've surveyed, fast.ai is the only truly applied deep learning course available out there.

This course is more than what meets the eye. The exercises and the discussions that follow are enough to keep you engaged for weeks! This was truly an immersive 7-week program :slight_smile: (I am actually trailing and will use the coming days/weeks to catch up.)

We international fellows have been a lucky bunch to have had the privilege of attending this live! The best we can do now is get more people on board to experience it, and of course apply what we've learned to our own work.

The journey is long and this was an excellent boost.

You are the best! :slight_smile:

As a novice, this is how I feel now :slight_smile:

kungfu deep learning.

22 Likes

Now you are talking like AlphaZero.

1 Like

Hey @anandsaha, if I had known you know kungfu, I wouldn't have mocked you with "are you the police" :wink:

There are rumours that part 2 v2 is gonna include 50 lessons …

1 Like

@jeremy, does the international fellowship cover part 2 as well? And if so, when will it start?

Haha, we should meet :slight_smile:

Brace for impact!

1 Like

I echo everything said here - it's been such a great experience participating in this course. I am limited in experience and time, so I can't extract maximum benefit just yet, but I feel very well equipped to revisit everything, try to implement it, and really cement things. I am utterly grateful for @jeremy's and @yinterian's efforts and collaboration, but most especially to @jeremy for making this accessible through the way he teaches; the insights and competence he shows have helped me so very much to feel confident that I am participating in something awesome!

Things are within my grasp that I had previously considered beyond my understanding. I would love to come back for more next year, and am dedicated to accomplishing something good with all of this over the next few months!

6 Likes

I've found The Wild Week in AI newsletter to be helpful: https://www.getrevue.co/profile/wildml

1 Like

Just posted the video to the top thread.

1 Like

Thank you, @jeremy! The course has given me so many new tools, so much knowledge, and so many experiences that I cannot even begin to describe how grateful I am. Before part 2 starts, I will watch and re-watch the DL and ML courses until I know them by heart. I will create another PR. And I will make submissions to at least 2 Kaggle competitions. Thanks once again!

5 Likes

I don't know how to thank you enough, @jeremy, @yinterian, and everyone who kindly helped in the forums! The step-by-step implementation @jeremy showed last night, starting with a shallow NN and ending with ResNet, was my favorite. This course was really life-changing for me. I'm kind of just standing at the starting point. I'll watch and re-watch all the DL and ML lecture videos and replicate the notebooks on my own datasets to digest everything deeply. Hope to see you all again in Part 2 v2.

7 Likes

I don't quite fully grok it in the context of ReLU and all, but here's a quick summary of my naive understanding. Please feel free to correct my misunderstanding here.

  • backpropagation through layers => chain rule applied function by function => lots of matrix multiplications (f1 * f2 * f3 * … * fn)

Given that, the output of those multiplications will 'explode' (grow exponentially) in proportion to the weights in those layers/functions, even if each individual layer's gain is fairly small, so long as it is larger than 1 (e.g. 1.1^32 ≈ 21.1).

Now, if you constrain the outputs of those layers/functions to remain within (0, 1) or (-1, 1), they can't keep growing exponentially (and hence won't explode). Hence the use of the sigmoid (0, 1) and tanh (-1, 1) squashing functions.
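To make that concrete, here is a tiny toy example (my own sketch, not from the lecture) showing a per-layer gain of 1.1 compounding over 32 layers, with and without a tanh squash:

import torch

torch.manual_seed(0)
x = torch.randn(10)  # toy activations

raw, squashed = x.clone(), x.clone()
for _ in range(32):
    raw = 1.1 * raw                        # no squashing: scale grows like 1.1**32 ~ 21x
    squashed = torch.tanh(1.1 * squashed)  # tanh keeps every value inside (-1, 1)

print(raw.abs().max())       # roughly 21x the original scale
print(squashed.abs().max())  # still bounded below 1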

1 Like

Very much agree!

Thanks a ton, @jeremy and @yinterian, for all the inspiration and education, and for making this resource freely available. Bracing myself for rewatching all the lectures, more experimentation and, of course, part 2.

Depending on your nationality, you may be able to do what I did and apply via ESTA (Electronic System for Travel Authorization). This authorization lasts for 2 years or until your passport expires, whichever is earlier. To apply, visit the official website of the Department of Homeland Security. Please check carefully for any updates to the laws and regulations, and comply with the limitations of the visa.

When I lowered bptt and bs in the imdb notebook, I was able to run through the cells, but it took hours to get through the 14 epochs of learner.fit; otherwise, my 6GB GPU ran out of memory. Here's what I used:

bptt = 50
bs = 32

I'll try again with bptt=70, keeping bs at 32.
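For anyone reproducing this, here's roughly where those two values plug in, as a sketch from memory of the lesson 4 imdb notebook (fastai 0.7-era API; PATH, TEXT, and FILES are assumed to be set up exactly as in the notebook):

from fastai.nlp import *  # 0.7-era import, as in the notebook

bptt = 50  # shorter backprop-through-time window -> fewer activations held at once
bs = 32    # smaller batch -> lower peak GPU memory

# PATH, TEXT (the torchtext Field), and FILES (train/validation/test paths)
# are assumed to be defined as in the notebook
md = LanguageModelData.from_text_files(PATH, TEXT, **FILES, bs=bs, bptt=bptt, min_freq=10)

Since the stored activations scale roughly with bs * bptt, going from the notebook's defaults (bptt=70, bs=64, if I remember them right) down to 50/32 cuts that footprint to about a third.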

1 Like