Shouldn't we be able to pass an activ function to Learner.get_preds

Hi!

I am currently going back and forth between different loss functions, some custom and some plain PyTorch. I find it therefore a bit annoying that Learner.get_preds automatically calls loss_func2activ and doesn't allow passing a custom activation, because I have to change my post-processing after get_preds depending on whether a sigmoid was applied or not (basically, with nn.BCEWithLogitsLoss it applies a sigmoid, but with a custom loss it doesn't). I think it would be better to add an activ argument to Learner.get_preds (mirroring the module-level get_preds function it then calls) and check whether it is None; only if it is None should it fall back to loss_func2activ.

That would give consistent behavior when switching between custom and built-in losses.
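For concreteness, a rough sketch of what I mean (the method body below paraphrases the fastai v1 implementation from memory, so the exact defaults and helper names may differ slightly):

```python
# Hypothetical sketch of the proposed change inside fastai v1's Learner;
# details paraphrased, not copied from the library source.
def get_preds(self, ds_type=DatasetType.Valid, activ=None, with_loss=False,
              n_batch=None, pbar=None):
    "Return predictions and targets on `ds_type` dataset."
    lf = self.loss_func if with_loss else None
    # Fall back to loss_func2activ only when no activation was passed explicitly.
    if activ is None: activ = loss_func2activ(self.loss_func)
    # Delegate to the module-level get_preds, forwarding the chosen activation.
    return get_preds(self.model, self.dl(ds_type),
                     cb_handler=CallbackHandler(self.callbacks),
                     activ=activ, loss_func=lf, n_batch=n_batch, pbar=pbar)
```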

Sure, this does seem like a good idea. Feel free to suggest a PR in that direction.

Nice, I'll work on this then!

Oh, I see I was actually too slow. Thanks for adding the feature, then!

Yes, somehow I beat you to it :wink:

Hello, I've run into the same problem. Have you made a PR for it?

It has even been released, I think; just upgrade fastai.
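For anyone finding this later, usage would look something like the sketch below, assuming the released activ argument matches the proposal above (check help(learn.get_preds) on your installed version):

```python
import torch.nn as nn

# `learn` is an existing fastai v1 Learner (assumed). Passing the activation
# explicitly gives the same post-processing whatever loss was used in training.
preds, targets = learn.get_preds(activ=nn.Sigmoid())
```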