Hi Jeremy, can I rewatch today's lecture? It was during the night here in the EU, and I was not as awake as I wanted to be …
runs the model on multiple GPUs
The learning rate is too high, so the parameter updates overshoot. It's neatly explained here: https://towardsdatascience.com/estimating-optimal-learning-rate-for-a-deep-neural-network-ce32f2556ce0
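To make the idea concrete, here is a minimal sketch of the LR range test that the linked article (and `learn.lr_find()`) is based on, using a toy 1-D quadratic loss and plain SGD rather than the actual fastai internals: increase the learning rate exponentially each step and watch where the loss stops shrinking and starts to diverge.

```python
# Sketch of the LR range test on a toy quadratic loss (not the fastai code).
def loss(w):          # toy loss with its minimum at w = 3
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0
lr = 1e-4
records = []
for _ in range(60):
    l = loss(w)
    records.append((lr, l))
    if l > 1e6:       # stop once the loss has clearly exploded
        break
    w -= lr * grad(w)  # one SGD step at the current learning rate
    lr *= 1.3          # exponentially increasing LR schedule

# The LR at which the recorded loss was lowest sits just before divergence
best_lr = min(records, key=lambda r: r[1])[0]
print(f"loss bottomed out near lr={best_lr:.3f}")
```

For this quadratic (curvature 2), SGD is stable only for lr < 1, and the recorded loss indeed bottoms out right around that point before blowing up, which is exactly the shape of the curve `lr_find()` plots.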
No problem. So is it something like what @KevinB said, that this number is actually the predicted probability for the incorrect class (and not the correct one, as Jeremy said)? If it's not, I really don't understand this point…
Great session, also a huge thanks to the people answering questions live on here!
Will this be resolved anytime soon, or can I make some custom changes so the fastai functions work for me on Colab?
If I'm not mistaken, the video will be available right after the lecture
correct
No idea, I was just linking to where it's discussed. Please continue over there.
Which layers does learn.lr_find() find the optimal rate for?
Note that ResNet-50 takes around half the operations to achieve equally competitive results. So while I agree that Inception-v4 is currently the best if you want maximum accuracy, ResNet wins in terms of efficiency.
Are there any threads with instructions for using multiple GPUs with the fastai library?
I searched, but couldn't find one with instructions.
Thank you in advance!
Thanks a lot!
Search for nn.DataParallel
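For anyone searching: a minimal sketch of the idea, using plain PyTorch rather than any fastai-specific wrapper (the tiny `nn.Linear` model here is just a placeholder):

```python
import torch
import torch.nn as nn

# Placeholder model; in practice this would be your fastai/PyTorch model.
model = nn.Linear(10, 2)

# nn.DataParallel replicates the model on each visible GPU and splits
# every batch across the replicas, gathering the outputs afterwards.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)

# The forward pass is called exactly as before.
x = torch.randn(8, 10, device=device)
out = model(x)
print(out.shape)
```

Note that the batch is split along dimension 0, so the per-GPU batch size is your batch size divided by the number of GPUs.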
Just want to make sure: the stream link (https://www.youtube.com/watch?v=7hX8yKCX6xM) will still be rewatchable for the length of the course (till December), right? Or will it be taken down?
Great!!!
And I found a notebook with an example: https://github.com/fastai/fastai/blob/bbcd4e0ce5614630aed31695af92df139d3c489f/courses/dl2/cifar10-darknet.ipynb
It is really helpful!
Thanks a lot!
Thank you for the great lecture today!
Awesome Lecture! Kudos to fast.ai team!