I was also confused, so I went back to the workbook to look at the actual Python definitions:
```python
def mae(preds, acts): return (torch.abs(preds-acts)).mean()

def quad_mae(params):
    f = mk_quad(*params)
    return mae(f(x), y)

# earlier we defined y as a list of noisy measurements
# y = add_noise(f(x), 0.15, 1.5)
```
I think confusion might arise (as it did for me) because quad_mae uses y (which is all of the noisy measurements) without y being passed in as a parameter; it's picked up from the enclosing global scope instead.
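If it helps, here's a minimal, self-contained sketch of the same pattern with x and y passed in explicitly. The mk_quad stand-in and the noise step below are my own simplified versions, not the book's exact code:

```python
import torch

# Simplified stand-ins for the book's helpers (assumptions, not the
# book's exact definitions) so the example runs on its own
def mk_quad(a, b, c):
    return lambda x: a * x**2 + b * x + c

def mae(preds, acts):
    return torch.abs(preds - acts).mean()

x = torch.linspace(-2, 2, steps=20)
y = mk_quad(3, 2, 1)(x) + torch.randn(20) * 0.15   # noisy "measurements"

# Same loss as quad_mae, but with x and y as explicit parameters
# instead of globals
def quad_mae_explicit(params, x, y):
    f = mk_quad(*params)
    return mae(f(x), y)

print(quad_mae_explicit([1.0, 1.0, 1.0], x, y))
```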
Hope this helps!
Hi. As an exercise for chapter 4 of the book, I've tried a simple model to classify the full MNIST dataset. I'm getting about 92% accuracy with logistic regression and above 95% with a two-layer model. Feedback is more than welcome: Fastbook ch4 MNIST complete | Kaggle
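For anyone curious what those two kinds of models look like in code, here's a rough sketch. The layer sizes follow from MNIST (28×28 = 784 pixels, 10 classes), but the hidden width is my own arbitrary choice, not necessarily what the notebook uses:

```python
import torch
from torch import nn

# Logistic regression is just a single linear layer (softmax is applied
# by the loss function, e.g. nn.CrossEntropyLoss)
logistic = nn.Linear(784, 10)

# A two-layer model: linear -> nonlinearity -> linear
# (hidden size 30 is an assumption for illustration)
two_layer = nn.Sequential(
    nn.Linear(784, 30),
    nn.ReLU(),
    nn.Linear(30, 10),
)

xb = torch.randn(64, 784)   # a dummy batch of flattened images
print(logistic(xb).shape)   # torch.Size([64, 10])
print(two_layer(xb).shape)  # torch.Size([64, 10])
```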
These concepts are no less relevant in the era of vibe coding. I'm looking to make this learning experience more social and evenly paced, if anybody would like to join me. I live in the UTC-4 time zone.
If anyone is looking for a refresher on calculus before doing the book chapter, or even this lecture, 3b1b has a good playlist: the first two chapters are enough, but watching it all gives a lot of intuition and visual representations.