Good idea to not @ moderators unless really needed. Imagine all the notifications they’d be getting.
Exactly.
I think it should be
y = x@a + torch.ones(n).uniform_(-1, 1)
https://forums.fast.ai/t/lesson-2-in-class-discussion/28632/4?u=ste
what are the dimensions of the y tensor?
This is mathematically true, but we don’t need it for the explanation of SGD, which is all this is about.
Correct.
The matrix math we’re learning now: is it important just as a first-principles understanding, or will it be used frequently day-to-day in AI?
y will have the same number of rows as x, but its rank is 1, so essentially shape (n,).
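To make the shapes concrete, here’s a minimal sketch assuming the lesson-2 setup (an `x` of shape `(n, 2)` whose second column is ones, and `a` of shape `(2,)`):

```python
import torch

n = 100
x = torch.ones(n, 2)        # assumed lesson-2 setup: second column stays 1
x[:, 0].uniform_(-1., 1.)   # first column: random inputs
a = torch.tensor([3., 2.])  # the "true" parameters

y = x @ a + torch.ones(n).uniform_(-1, 1)  # line from the thread: add uniform noise
print(y.shape)  # torch.Size([100]) -- rank 1, one element per row of x
```

So `x @ a` collapses the two columns into a single number per row, and adding the rank-1 noise tensor keeps `y` rank 1.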
It’ll be used extremely frequently, so you want to get comfortable with it, if you aren’t yet. Good news is, you have a week ahead of you
I realised that while trying to understand why the predicted “a” was about 0.5 away from the expected a = (3, 2).
Along the same lines, how deeply do we need to understand linear algebra? To the level of the computational linear algebra course? And, are there more efficient resources to get up to speed aside from watching semesters of MIT/Stanford classes or going through whole topics within Khan Academy?
array <=> matrix <=> tensor
dimension <=> rank
gradient <=> derivative <=> slope
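To illustrate the dimension/rank naming (a hypothetical example, not from the lesson), PyTorch reports the rank via `.dim()`:

```python
import torch

v = torch.tensor([1., 2., 3.])  # a vector: rank 1
m = torch.ones(3, 4)            # a matrix: rank 2

# PyTorch calls the rank the number of "dimensions"
print(v.dim(), m.dim())  # 1 2
```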
You don’t need to do the computational linear algebra course in order to do the deep learning course. We recommend just watching videos for the individual topics you need, as needed.
Edited to add: Learning on an as-needed basis keeps you from spending a lot of time on material that you later find out you don’t need.
Here is a good resource of what will be needed:
The Matrix Calculus You Need For Deep Learning
Does it move all four of those things at the same time, or does it choose the best one per iteration?
It chooses the best thing you can do with those two parameters in one step.
How is the magnitude of the movement decided in GD?
That’s the parameter called the learning rate.
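A minimal sketch of one gradient-descent step, assuming the lesson’s MSE setup (names like `y_hat` and the starting guess are illustrative): the learning rate `lr` scales the gradient, and both parameters move at once in that single update.

```python
import torch

n = 100
x = torch.ones(n, 2)
x[:, 0].uniform_(-1., 1.)
y = x @ torch.tensor([3., 2.])  # targets from the "true" parameters

a = torch.tensor([-1., 1.], requires_grad=True)  # current guess
lr = 1e-1                                        # learning rate

y_hat = x @ a
loss = ((y_hat - y) ** 2).mean()  # mean squared error
loss.backward()                   # fills a.grad with the gradient

with torch.no_grad():
    a -= lr * a.grad  # both parameters updated together, scaled by lr
    a.grad.zero_()
```

The gradient gives the direction; `lr` decides how big a step to take in it.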
Is the grad a distance from the actual value?