Can't replicate notebook from lecture 5

Dear all,

After watching lecture 5 I wanted to replicate the notebook used there: Linear model and neural net from scratch | Kaggle
So I began building my own version: LM and NN from scratch using PyTorch | Kaggle
After implementing linear regression, training does work, but the loss decreases much more slowly than in the lecture's notebook. With lr=0.2 and 18 epochs:

My notebook: 0.531; 0.516; 0.503; 0.490; 0.478; 0.465; 0.453; 0.441; 0.431; 0.425; 0.420; 0.416; 0.413; 0.411; 0.408; 0.406; 0.404; 0.403
Lecture notebook: 0.536; 0.502; 0.477; 0.454; 0.431; 0.409; 0.388; 0.367; 0.349; 0.336; 0.330; 0.326; 0.329; 0.304; 0.314; 0.296; 0.300; 0.289

Both start with the same set of coefficients, and by now I think I have copied all the functions from the notebook 1:1. Still, they train at different speeds.
Maybe you can spot my mistake. Any help is greatly appreciated!
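For context, the kind of from-scratch training loop being compared can be sketched like this. This is only a minimal illustration, not the notebook's exact code: the data is made up, and the function names (`calc_preds`, `calc_loss`) and the sigmoid output are assumptions based on the lecture:

```python
import torch

torch.manual_seed(42)

# Toy data standing in for the notebook's features (assumption: the real
# notebook uses the Titanic dataset; the shapes here are made up).
n_rows, n_coeff = 200, 5
X = torch.rand(n_rows, n_coeff)
true_w = torch.rand(n_coeff)
# Labels loosely correlated with the features so the model can learn something
y = ((X @ true_w) > true_w.sum() / 2).float()

coeffs = (torch.rand(n_coeff) - 0.5).requires_grad_()

def calc_preds(coeffs, X):
    # Assumption: sigmoid output, as in the lecture's linear model
    return torch.sigmoid(X @ coeffs)

def calc_loss(coeffs, X, y):
    # Mean absolute error between predictions and targets
    return torch.abs(calc_preds(coeffs, X) - y).mean()

lr, epochs = 0.2, 18
losses = []
for _ in range(epochs):
    loss = calc_loss(coeffs, X, y)
    loss.backward()
    with torch.no_grad():
        coeffs.sub_(coeffs.grad * lr)   # gradient-descent step
        coeffs.grad.zero_()             # reset grads so they don't accumulate
    losses.append(loss.item())

print(losses[0], losses[-1])
```

With identical data, identical starting coefficients, and identical functions, two runs of a loop like this are deterministic and must produce identical losses, which is why a per-epoch difference points at a code difference rather than at the machine.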


There are a couple of reasons why the performance of your notebook could be worse than the original, even though the code is the same.

  1. GPU acceleration can change the speed of the execution on Kaggle
  2. The speed of the machines can vary slightly on each day on Kaggle
  3. It’s possible to upgrade to Google Cloud AI Notebooks on Kaggle. Maybe the original notebook was run on better compute than you’re using

(These ideas assume that the code really is identical in both notebooks.)

If you haven’t tried it out, I’d suggest experimenting with the GPU acceleration setting and observing how it affects the performance.

Dear mw00,

Thank you so much for your response. In my case, "speed" sadly does not refer to wall-clock time but to the loss after x iterations. I think the underlying machine has no influence on that, but please correct me if I am wrong.
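One way to track down where two supposedly identical notebooks diverge is to print the loss every epoch in both and find the first epoch where the numbers differ; the bug lives in whatever runs just before that point. As a toy illustration (data and names are made up, and this is just one example of a subtle difference, not necessarily your bug), here is how forgetting to zero the gradients only shows up from the third epoch onwards:

```python
import torch

def train(zero_grads, epochs=5, lr=0.2):
    """Minimal linear-model training loop; zero_grads toggles one subtle bug."""
    torch.manual_seed(0)                 # same data and coefficients in both runs
    X, y = torch.rand(64, 4), torch.rand(64)
    coeffs = (torch.rand(4) - 0.5).requires_grad_()
    losses = []
    for _ in range(epochs):
        loss = torch.abs(X @ coeffs - y).mean()
        loss.backward()
        with torch.no_grad():
            coeffs.sub_(coeffs.grad * lr)
            if zero_grads:
                coeffs.grad.zero_()      # without this, grads accumulate across epochs
        losses.append(loss.item())
    return losses

print(train(zero_grads=True))
print(train(zero_grads=False))  # identical for two epochs, then diverges
```

The first two losses match exactly because the first update is the same either way; only the second update differs, so the loss curves split from the third epoch. Comparing per-epoch losses this way narrows a whole notebook down to a single step.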