A side question: is there any way to pass a threshold to learn.predict()? It would be helpful for multi-class tasks … but most helpful for multi-label tasks, where the threshold is often much less than 0.5.
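For now I'm working around it by thresholding myself. A rough sketch (this assumes a trained multi-label `learn` whose loss is `BCEWithLogitsLossFlat`, which decodes against its `thresh` attribute; the 0.3 is just an example value):

```python
# No threshold arg on learn.predict(), so either change the thresh on the
# loss function (its decodes compares against it) or threshold manually.
learn.loss_func.thresh = 0.3        # learn.predict() now decodes with 0.3
probs, targs = learn.get_preds()    # sigmoid probabilities on the valid set
labels = probs > 0.3                # manual multi-hot decisions
```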
As the optimal threshold is something I have set up as a metric … is there an easy way to grab the final value of that metric after training? Something like learn.metrics['opt_th'] (opt_th being the name of my metric)?
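Right now the best I've found is digging through the `Recorder` logs after the fact; a sketch (it assumes my metric shows up as `opt_th` in `learn.recorder.metric_names`):

```python
# Recorder stores one row per epoch in `values`: [train_loss, valid_loss, *metrics],
# while `metric_names` also has 'epoch' at the front and 'time' at the end,
# hence the -1 offset below.
idx = learn.recorder.metric_names.index('opt_th') - 1
final_opt_th = learn.recorder.values[-1][idx]
```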
Also, is there a way to update the loss function’s threshold from within a metric before the other metrics are calculated?
… and btw, I’m open to submitting PRs … I just don’t want to get in your way since y’all are in the middle of finishing up v2 and the upcoming course/book. Just lmk.
Mmm, sounds like you’d be better off having this in a Callback than a metric. Any callback sets itself as an attribute of the learner under a snake-case name, so you could then access your threshold (and set it on the loss function) with learn.best_thresh.thresh (if the callback is called BestThresh). If you run it before the Recorder (with `run_before=Recorder`) then it should work, I guess?
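An untested sketch of what I mean; everything here (the brute-force scan, using accuracy as the criterion, pushing onto a `thresh` attribute of the loss) is just illustration, not fixed API:

```python
import torch
from fastai.callback.core import Callback
from fastai.learner import Recorder

class BestThresh(Callback):
    "Accumulate validation preds, then store the threshold that maximizes accuracy."
    run_before = Recorder   # so learn.best_thresh.thresh is set before logging

    def before_validate(self): self.preds, self.targs = [], []

    def after_batch(self):
        if not self.training:   # only collect on the validation pass
            self.preds.append(torch.sigmoid(self.pred).detach().cpu())
            self.targs.append(self.yb[0].detach().cpu())   # yb is a tuple

    def after_validate(self):
        preds, targs = torch.cat(self.preds), torch.cat(self.targs)
        threshs = torch.linspace(0.05, 0.95, 19)           # brute-force scan
        accs = torch.stack([((preds > t).float() == targs).float().mean()
                            for t in threshs])
        self.thresh = threshs[accs.argmax()].item()
        # push it onto the loss function if it has one (e.g. BCEWithLogitsLossFlat)
        if hasattr(self.loss_func, 'thresh'): self.loss_func.thresh = self.thresh
```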
Is there a way to add the result of this callback (e.g., the optimal threshold) as a column in the table displayed during training, alongside epoch, train_loss, valid_loss, <metrics…>, and time?
Not if there is no metric associated with it, AFAICT. Have a look at the superres lesson port to v2; there are several things plotted there, but I don’t remember how it was done.
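If you want a column badly enough, you can always wrap the callback’s value in a metric yourself; e.g. (assuming the `BestThresh` sketch above, and that your fastai version has `ValueMetric`):

```python
from fastai.metrics import ValueMetric

learn.add_cb(BestThresh())
# ValueMetric reports whatever the function returns at the end of each epoch,
# so 'opt_th' shows up as a column in the training table
learn.metrics = learn.metrics + [ValueMetric(lambda: learn.best_thresh.thresh, 'opt_th')]
```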
I’m trying to figure out how to use synth_learner to create tests for my callback, and it isn’t multi-label friendly by default. As I asked in another post: is there a way to return a synth_learner that is set up as a multi-label learner?
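In the meantime I’ve been rolling my own; a rough sketch (the helper name and all the sizes are made up):

```python
import torch
from fastai.data.core import DataLoaders
from fastai.learner import Learner
from fastai.losses import BCEWithLogitsLossFlat

def synth_multilabel_learner(n=100, n_in=4, n_lbl=3, bs=16):
    "Tiny random multi-label problem: n_in features -> n_lbl independent labels."
    x = torch.randn(n, n_in)
    y = (torch.rand(n, n_lbl) > 0.5).float()   # random multi-hot targets
    items = list(zip(x, y))
    dls = DataLoaders.from_dsets(items[:int(0.8*n)], items[int(0.8*n):], bs=bs)
    return Learner(dls, torch.nn.Linear(n_in, n_lbl),
                   loss_func=BCEWithLogitsLossFlat())
```

Then `synth_multilabel_learner().fit(1)` runs on CPU, which is enough for a quick callback test.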