Hi!
I am currently going back and forth between different loss functions, some custom and some standard PyTorch ones. I therefore find it a bit annoying that Learner.get_preds automatically calls loss_func2activ and doesn’t allow a custom activation to be passed, since I need to change my post-processing after get_preds depending on whether a sigmoid was applied (basically, if I use nn.BCEWithLogitsLoss a sigmoid is applied, but if I use a custom loss it isn’t). I think it would be better to add an activ argument to Learner.get_preds (similar to the lower-level get_preds function it calls) and check whether it is None; if it is, fall back to loss_func2activ.
This would give consistent behavior when switching between custom and built-in losses.
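Here is a minimal sketch of the proposed fallback logic. The simplified `get_preds` and `loss_func2activ` below are illustrative stand-ins, not fastai’s actual implementations; only the name `loss_func2activ` and the `activ`-defaults-to-`None` pattern come from the suggestion above:

```python
import torch
import torch.nn as nn

def loss_func2activ(loss_func):
    """Simplified stand-in: infer an activation from the loss function type."""
    if isinstance(loss_func, nn.BCEWithLogitsLoss):
        return torch.sigmoid
    if isinstance(loss_func, nn.CrossEntropyLoss):
        return lambda x: torch.softmax(x, dim=-1)
    return lambda x: x  # unknown/custom loss: no activation applied

def get_preds(raw_preds, loss_func, activ=None):
    """Proposed behavior: use a caller-supplied activation if given,
    otherwise fall back to inferring it from the loss function."""
    if activ is None:
        activ = loss_func2activ(loss_func)
    return activ(raw_preds)

# With a custom loss, explicitly pass a sigmoid so the output matches
# what nn.BCEWithLogitsLoss would have produced:
logits = torch.tensor([0.0, 2.0])
custom_loss = lambda p, t: ((torch.sigmoid(p) - t) ** 2).mean()
preds = get_preds(logits, custom_loss, activ=torch.sigmoid)
```

With this pattern, switching between nn.BCEWithLogitsLoss and a custom loss no longer changes the scale of the predictions, since the caller controls the activation in both cases.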
