Is there any way to customize the output of "learn.predict"?

Use case: a token classification task where calling learn.predict returns predictions, probabilities, and labels even for the padded tokens. I’d like to return results only for the actual tokens.

Is there a way to do this?

If not, perhaps the function signature could be updated to accept a process_results argument: a function that the results of learn.predict are passed to for post-processing, if provided. A hypothetical sketch of what I have in mind is below (process_results, drop_padding, and the pad_label value are illustrative names, not part of the fastai API):
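# Hypothetical sketch only: process_results, drop_padding, and pad_label
# are illustrative names, not existing fastai arguments.
def drop_padding(dec_labels, dec_preds, probs, pad_label='xxpad'):
    "Keep only the positions whose decoded label is not the padding label."
    keep = [i for i, lbl in enumerate(dec_labels) if lbl != pad_label]
    return [dec_labels[i] for i in keep], dec_preds[keep], probs[keep]

# proposed usage:
# learn.predict(item, process_results=drop_padding)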

If you are using fastai2, you can monkey patch Learner's predict method to alter its behavior. Here is a simple example of how to do that:

# @patch and Learner come from the library (fastai2.basics in the pre-release; fastcore / fastai.learner in current releases)
from fastcore.all import patch
from fastai.learner import Learner

@patch
def predict(self: Learner, **kwargs):
    print('hello world')   # replacement body just echoes what it received
    print(kwargs)
    return None

learn.predict(item='testing', rm_type_tfms=1, with_input=False)
>> hello world
>> {'item': 'testing', 'rm_type_tfms': 1, 'with_input': False}
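Applied to the padded-token use case above, a minimal sketch could wrap the original predict rather than replace it. This assumes predict returns fastai's usual (decoded target, decoded predictions, probabilities) triple; keep_mask is a hypothetical argument, and how you build it depends on your tokenization pipeline:

# Capture fastai's real predict before any patching (e.g. in a fresh session,
# before running the hello-world demo above).
_orig_predict = Learner.predict

@patch
def predict(self: Learner, item, keep_mask=None, **kwargs):
    "Sketch: call the original predict, then drop positions marked as padding."
    dec, dec_preds, probs = _orig_predict(self, item, **kwargs)
    if keep_mask is None:
        return dec, dec_preds, probs
    return dec, dec_preds[keep_mask], probs[keep_mask]

# usage: keep_mask is a boolean tensor marking the real (non-padded) tokens
# dec, dec_preds, probs = learn.predict(item, keep_mask=mask)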

FYI: monkey patching is one of the new features introduced in fastai2. For details, you can refer to the paper fastai: A Layered API for Deep Learning.


Perfect! Thanks for the tip. There are so many cool features in v2 I completely forgot about @patch.