After watching lecture 5, I wanted to replicate the notebook used there: Linear model and neural net from scratch | Kaggle
So I began building my own version: LM and NN from scratch using PyTorch | Kaggle
After implementing linear regression, training does work, but the loss drops much more slowly than in the lecture's notebook. With lr=0.2 over 18 epochs, the two loss curves are:

My notebook: 0.531; 0.516; 0.503; 0.490; 0.478; 0.465; 0.453; 0.441; 0.431; 0.425; 0.420; 0.416; 0.413; 0.411; 0.408; 0.406; 0.404; 0.403
Lecture notebook: 0.536; 0.502; 0.477; 0.454; 0.431; 0.409; 0.388; 0.367; 0.349; 0.336; 0.330; 0.326; 0.329; 0.304; 0.314; 0.296; 0.300; 0.289

Both runs start from the same set of coefficients, and by now I believe I have copied every function 1:1 from the lecture notebook. Still, they train at different speeds.
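For reference, here is a minimal sketch of the kind of from-scratch loop both notebooks use. The data below is a hypothetical random stand-in (the real notebooks use the Titanic features), and the function names `calc_preds`/`calc_loss` follow the lecture's naming but are my own reconstruction, not the exact code. In my experience, two common causes of the "identical code, slower convergence" symptom are forgetting `coeffs.grad.zero_()` after the update, or normalizing the input features differently:

```python
import torch

torch.manual_seed(42)

# Hypothetical stand-in data, NOT the Titanic set from the notebooks
n, n_feat = 200, 5
X = torch.rand(n, n_feat)
true_w = torch.tensor([0.5, -1.0, 0.3, 0.8, -0.2])
y = (X @ true_w + 0.1 * torch.rand(n)).clamp(0, 1)

# Random init in [-0.5, 0.5], similar in spirit to the lecture notebook
coeffs = (torch.rand(n_feat) - 0.5).requires_grad_()

def calc_preds(coeffs, X):
    # Plain linear model: weighted sum of features
    return X @ coeffs

def calc_loss(coeffs, X, y):
    # Mean absolute error, as used in the lecture
    return torch.abs(calc_preds(coeffs, X) - y).mean()

lr = 0.2
losses = []
for epoch in range(18):
    loss = calc_loss(coeffs, X, y)
    loss.backward()
    with torch.no_grad():
        coeffs.sub_(coeffs.grad * lr)  # gradient descent step
        coeffs.grad.zero_()            # omitting this accumulates gradients
                                       # and changes the effective step size
    losses.append(loss.item())

print(losses)
```

If both notebooks really match line for line, it may be worth diffing the preprocessing (normalization, dummy variables, column order) rather than the training loop itself, since a different feature scale changes convergence speed even with identical coefficients and learning rate.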
Maybe you can spot my mistake. Any help is greatly appreciated!
Thank you so much for your response. In my case, "speed" sadly does not refer to wall-clock time but to the loss after a given number of iterations. I think the underlying machine should have no influence on that, but please correct me if I am wrong.