Yes, it seems you've got it now!
Just to remind you, this is how each weight is updated: `weight = weight - learning_rate * gradient`
So I would maybe rephrase it as: "Update all the parameters according to their gradients with respect to that loss, times the learning rate."
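To make that concrete, here is a minimal sketch of the update rule in NumPy — the weight and gradient values are made up for illustration:

```python
import numpy as np

# Hypothetical toy parameters and their gradients (dLoss/dWeight)
weights = np.array([0.5, -1.2, 3.0])
gradients = np.array([0.1, -0.4, 0.2])
learning_rate = 0.01

# The update rule: weight = weight - learning_rate * gradient
weights = weights - learning_rate * gradients
print(weights)
```

Every parameter moves a small step in the direction that decreases the loss, scaled by the learning rate.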
Also, if you want more information about why we only process a subset (i.e. a mini-batch) of the training set at each step, take a look at this blog post. Basically, it is to take advantage of the vectorization capabilities of your GPU and to be computationally efficient.
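A rough sketch of what one epoch of mini-batch gradient descent looks like — here on a made-up linear-regression problem, where each batch's gradient is computed with a single vectorized matrix product instead of a Python loop over examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: targets generated by a known linear map
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

w = np.zeros(3)
learning_rate = 0.1
batch_size = 32

# One epoch: each step only sees batch_size examples,
# but the gradient for the whole batch is one matrix product.
for start in range(0, len(X), batch_size):
    xb = X[start:start + batch_size]
    yb = y[start:start + batch_size]
    error = xb @ w - yb                      # predictions minus targets
    grad = 2 / len(xb) * (xb.T @ error)      # gradient of mean squared error
    w = w - learning_rate * grad             # the same update rule as above

print(w)  # close to true_w after a single epoch
```

The batch size trades off gradient noise against how much parallel work the GPU gets per step.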