How does Jeremy's code run so fast? I have 8 GB of RAM and only a CPU, and it takes ages to run the PyTorch lines of code. Is there a way to use all the cores to run the code faster?
I guess what I meant was: why isn't it something like get_column(2), which would be intuitive to my brain? I also wondered where the name came from.
What guides the choice of non-linearity? Is it just a case of choosing whatever works best for your system, or is there something in the data that guides the choice? And do you just trust fastai to include it if "it just works"?
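For what it's worth, the choice is mostly empirical; ReLU is the common default simply because it tends to train well. It's easy to experiment by swapping the activation in a small model. A minimal sketch (the layer sizes here are just illustrative):

```python
import torch.nn as nn

# Same two-layer net, different non-linearities: train both and compare
relu_net = nn.Sequential(nn.Linear(28 * 28, 30), nn.ReLU(), nn.Linear(30, 1))
sigmoid_net = nn.Sequential(nn.Linear(28 * 28, 30), nn.Sigmoid(), nn.Linear(30, 1))
```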
He uses a beefy Titan RTX card on his laptop, hence the speed! That definitely helps move things along.
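If you're stuck on CPU, you can at least confirm whether PyTorch sees a GPU and make sure it's using all of your cores. A quick sketch using standard PyTorch calls:

```python
import os
import torch

# Jeremy's speed mostly comes from this being True on his machine
print(torch.cuda.is_available())

# On a CPU-only box, let PyTorch use every core for intra-op parallelism
torch.set_num_threads(os.cpu_count() or 1)
print(torch.get_num_threads())
```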
Speaking of jargon – what is backpropagation? Is it the forward pass and the backward pass together?
In the fastai library NN example, when using resnet18 (or deeper networks) as the model, do we need to set pretrained=True?
This is as close to neural nets from scratch as we get in the fastai course, I think. Where many other courses start here, we arrive after building top-level multi-class classifiers and a strong foundation in ethics.
If you are struggling with the questionnaire or want to check your answers, check out the questionnaire solutions.
Backpropagation is just another name for the backward pass.
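Concretely, in PyTorch the forward pass computes the predictions and the loss, and `loss.backward()` runs backpropagation, filling in `.grad` on every parameter. A minimal sketch with made-up shapes:

```python
import torch

x = torch.randn(16, 3)                 # a small batch of inputs
y = torch.randn(16, 1)                 # targets
w = torch.randn(3, 1, requires_grad=True)

pred = x @ w                           # forward pass
loss = ((pred - y) ** 2).mean()        # mean squared error
loss.backward()                        # backward pass, i.e. backpropagation

print(w.grad.shape)                    # gradients now live in w.grad
```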
Nope, the models from the torchvision model zoo are assumed to be pretrained.
By default it is set to True:

```python
cnn_learner(dls, arch, loss_func=None, pretrained=True,
            cut=None, splitter=None, y_range=None, config=None,
            n_out=None, normalize=True, opt_func='Adam', lr=0.001,
            cbs=None, metrics=None, path=None, model_dir='models',
            wd=None, wd_bn_bias=False, train_bn=True,
            moms=(0.95, 0.85, 0.95))
```
Here is the link to the documentation: cnn_learner
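In practice you only need to pass `pretrained` when you want to turn it off. A minimal sketch (assuming `dls` is a `DataLoaders` you've already built):

```python
from fastai.vision.all import *

learn = cnn_learner(dls, resnet18)  # pretrained=True by default
learn_scratch = cnn_learner(dls, resnet18, pretrained=False)  # random init, train from scratch
```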
A friendly and respectful reminder not to worry too much about jargon, and to keep playing with the code. The trap of going down these rabbit holes always pulls me into reading theory that we'll eventually learn anyway.
Agreed. The other from-scratch course I would recommend, outside of Prof. Fei-Fei Li's, is Andrej Karpathy's CV course (no ethics focus there, just a lot of stuff from scratch, which is awesome). @jeremy and @rachel have done a phenomenal job with the new material here (so excited about the new O'Reilly book coming out).
Shared this in our Beginner Study Session last week … but figured I’d share it again here:
https://pytorch.org/tutorials/beginner/nn_tutorial.html
IMO, the best "how does all this work, from scratch to PyTorch" article around! Tremendously valuable in teaching me how to debug, visualize, and build/train models.
Anyone got the code snippet to show the params as an image?
Pretty great blog on backprop: https://colah.github.io/posts/2015-08-Backprop/
You could also try it with a hook and a callback, I guess.
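For the hook route, something like this should work; a rough sketch using PyTorch's `register_forward_hook` (the layer index is just an example):

```python
# Grab a batch to push through the model
xb, yb = learn.dls.one_batch()

activations = {}

def save_activation(module, inp, out):
    # Stash the layer's output so we can inspect or plot it later
    activations['out'] = out.detach()

# Attach a forward hook to the first part of the model
handle = learn.model[0].register_forward_hook(save_activation)
learn.model(xb)
handle.remove()  # always remove hooks when you're done

print(activations['out'].shape)
```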
Here is the snippet of code used:

```python
# Grab the model in variable m
m = learn.model
# Grab the parameters of the first layer
w, b = m[0].parameters()
# Plot the weight parameter as a 28x28 image
show_image(w[0].view(28, 28))
```
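If you're outside the fastai environment, the plain matplotlib equivalent would be something like this (assuming the same linear layer in `m[0]`):

```python
import matplotlib.pyplot as plt

w, b = m[0].parameters()
plt.imshow(w[0].detach().view(28, 28), cmap='gray')
plt.show()
```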