Wiki / Lesson Thread: Lesson 9

(melissa.fabros) #1

Lesson Resources

Notes: (Under Construction)

Review of PyTorch components by writing logistic regression

Softmax vs. Sigmoid activation functions

Introduction to Gradient Descent

Introduction to Learning Rates

Introduction to Broadcasting
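
The softmax vs. sigmoid comparison in the notes above can be sketched with a few lines of PyTorch (illustrative values only):

```python
import torch

# Sigmoid squashes each logit independently into (0, 1);
# the outputs need not sum to 1 (suited to multi-label problems).
logits = torch.tensor([2.0, 1.0, 0.1])
sig = torch.sigmoid(logits)

# Softmax exponentiates and normalizes, so the outputs form a
# probability distribution over the classes (they sum to 1) --
# suited to single-label, multi-class problems.
soft = torch.softmax(logits, dim=0)

print(sig.sum())   # generally != 1
print(soft.sum())  # == 1 (up to floating point)
```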


(Prince Grover) #2

I have a few questions from the class –

  1. net = nn.Sequential( nn.Linear(28*28, 10), nn.LogSoftmax() )

In the last non-linear layer, why did we use LogSoftmax rather than Softmax? Weren't we exponentiating the outputs of the second-to-last layer to make them all positive? Why go back to log after computing [exp]/[sum of exp]?

  2. nn.Parameter(torch.randn(*dims)/dims[0])

What is the reason for dividing by dims[0]? I tried it without the division and it doesn't work: fit() gives loss = nan and very bad accuracy.

Thanks :slight_smile:


(Jeremy Howard (Admin)) #4

I just posted the video.


(Jeremy Howard (Admin)) #5

The loss functions in PyTorch (e.g. NLLLoss) generally assume your network ends with LogSoftmax, for numerical stability and computational efficiency reasons.
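
A minimal sketch of that pairing (random logits, for illustration): NLLLoss expects log-probabilities, so the network ends in LogSoftmax; CrossEntropyLoss fuses the two steps and takes raw logits instead.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(4, 10)           # a batch of 4 raw network outputs
targets = torch.tensor([3, 1, 0, 7])  # class labels

# LogSoftmax + NLLLoss: NLLLoss expects log-probabilities as input.
log_probs = nn.LogSoftmax(dim=1)(logits)
loss1 = nn.NLLLoss()(log_probs, targets)

# CrossEntropyLoss = LogSoftmax + NLLLoss fused, applied to raw logits.
# Computing log-softmax directly (log-sum-exp trick) is more numerically
# stable than taking log(softmax(x)) in two steps.
loss2 = nn.CrossEntropyLoss()(logits, targets)

print(torch.allclose(loss1, loss2))  # True: the two formulations agree
```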

This is He initialization. Although I may have forgotten a sqrt there…

Without careful initialization you’ll get gradient explosion. We discuss this in the DL course.


(Prince Grover) #6

Helpful links. Thanks :slight_smile:


(Jidin) #7

Blogpost on Broadcasting in Pytorch
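
For a quick illustration of the broadcasting rules the post covers (toy tensors, illustrative only):

```python
import torch

# Broadcasting lets ops combine tensors of different shapes by
# (conceptually) expanding missing or size-1 dims, without copying data.
m = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])    # shape (2, 3)
v = torch.tensor([10., 20., 30.])   # shape (3,)

# v is broadcast across the rows of m: (3,) -> (1, 3) -> (2, 3)
print(m + v)
# tensor([[11., 22., 33.],
#         [14., 25., 36.]])

# A column vector broadcasts across the columns instead:
c = torch.tensor([[100.], [200.]])  # shape (2, 1)
print(m + c)
# tensor([[101., 102., 103.],
#         [204., 205., 206.]])
```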



(Tamada Rajesh Kumar) #8

hi there,
I have a question about optimizer.zero_grad(). I have gone over the section explaining why we have to call this function a couple of times, but I still don't understand it.
From the PyTorch forums, I understand that, except in the special case where one wants to simulate bigger batches by accumulating gradients, one has to invoke optimizer.zero_grad() to clear the gradients before the next batch.
I would like to understand Jeremy's explanation, though.
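
For reference, a minimal demonstration of the behavior in question (a single scalar parameter, illustrative only):

```python
import torch

# backward() *accumulates* gradients into .grad rather than
# overwriting them. Two backward passes without zeroing:
p = torch.ones(1, requires_grad=True)
(p * 2).backward()
print(p.grad)   # tensor([2.])
(p * 2).backward()
print(p.grad)   # tensor([4.]) -- accumulated, not replaced

# That is why a training loop calls optimizer.zero_grad() every
# iteration: otherwise each step would use the sum of all past
# batches' gradients. The accumulation behavior is also the feature
# that lets you simulate a bigger batch: call backward() on several
# mini-batches, then optimizer.step() once.
p.grad.zero_()  # what optimizer.zero_grad() does to each parameter
(p * 2).backward()
print(p.grad)   # tensor([2.]) -- just this pass's gradient
```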