Yes, it is magic. [just kidding]
I had the same question almost 5 months ago: you don’t call loss_func, you don’t do any one-hot labeling, and fastai does it all for you. (That’s why sometimes I don’t like top-down.)
The trick is the label_cls in the data block API. When you call one of the label_from_* methods, if you don’t specify your label_cls, fastai will pick one for you under the hood and select the loss function that best fits the problem.
Currently, I think the supported label_cls values (you can check the docs) include FloatList for floating-point regression, MultiCategoryList for multi-label, and so on.
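For intuition, here is a rough paraphrase of the inference fastai v1 does (based on ItemList.get_label_cls in fastai.data_block; this is a simplified sketch, not the exact source):

```python
import numpy as np
from numbers import Integral
from fastai.data_block import FloatList, CategoryList, MultiCategoryList

def guess_label_cls(first_label):
    """Rough sketch of how fastai v1 picks a label_cls from the first label."""
    if isinstance(first_label, (float, np.float32)):
        return FloatList           # regression -> MSELossFlat
    if isinstance(first_label, (str, Integral)):
        return CategoryList        # single-label classification -> CrossEntropyFlat
    if isinstance(first_label, (list, tuple, set)):
        return MultiCategoryList   # multi-label -> BCEWithLogitsFlat
    raise ValueError("could not infer a label_cls")

print(guess_label_cls(3.7))             # FloatList
print(guess_label_cls('7'))             # CategoryList
print(guess_label_cls(['cat', 'dog']))  # MultiCategoryList
```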
To clarify what I meant, here is an example.

In your MNIST example, if you just call label_from_df() and do nothing else, you will get nn.CrossEntropyLoss (wrapped as CrossEntropyFlat) as your loss function. Also, the y is put into the right form for a multi-class classification problem, and your data.c = 10.

However, if you call label_from_df(label_cls=FloatList), then you will have data.c = 1, and your model will have 1 output at the end. Also, the loss function is MSELossFlat(). Now you have a regression model that just tries to minimize the gap between your predictions and y.
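Putting that together, here is a minimal sketch using the fastai v1 data block API. The df (with a filename column and a 'label' column) and path are hypothetical placeholders for your own data:

```python
from fastai.vision import *

# df is assumed to have image filenames in its first column and a 'label'
# column; path points at the image folder. Both are placeholders.

# Default: labels look like categories, so fastai infers CategoryList.
data_clas = (ImageList.from_df(df, path)
             .split_by_rand_pct(0.2)
             .label_from_df(cols='label')   # label_cls inferred for you
             .databunch())
print(data_clas.c)          # 10 for the MNIST digits
print(data_clas.loss_func)  # FlattenedLoss of CrossEntropyLoss

# Forcing FloatList turns the same data into a regression problem.
data_reg = (ImageList.from_df(df, path)
            .split_by_rand_pct(0.2)
            .label_from_df(cols='label', label_cls=FloatList)
            .databunch())
print(data_reg.c)           # 1 -> the model head gets a single output
print(data_reg.loss_func)   # FlattenedLoss of MSELoss
```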
Also, you can just use the *Flat losses (MSELossFlat(), CrossEntropyFlat(), etc.); they behave like the normal torch loss functions. But if you ever run into a problem where a torch loss complains about a dimension mismatch between the preds and target shapes, the flattened version will almost always solve it by flattening your predictions and y.
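As a quick illustration of the flattening (a minimal sketch, assuming fastai v1, where the flattened losses live in fastai.layers):

```python
import torch
import torch.nn as nn
from fastai.layers import MSELossFlat

preds = torch.randn(16, 1)   # typical regression head output: [batch, 1]
targs = torch.randn(16)      # targets straight from the dataframe: [batch]

# Plain nn.MSELoss would broadcast [16, 1] against [16] into [16, 16],
# raising a warning and computing the wrong thing. The flattened version
# views both tensors as 1-D first, so the shapes always line up.
flat_loss = MSELossFlat()(preds, targs)
print(flat_loss)

# The underlying math is identical once the shapes agree:
plain_loss = nn.MSELoss()(preds.view(-1), targs.view(-1))
print(plain_loss)  # same value as flat_loss
```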