Fastai Library question about crit (learner.py/nlp.py)

I have a question regarding the Fastai library. Tagging @jeremy because I’m not sure who else is working on the library as a contributor, but I’d love to hear back from any and all contributors, as I’m interested in helping out with the project.

My first question is about self.crit in Learner (learner.py), though I’m specifically looking at RNN_Learner (nlp.py), which inherits from Learner. I’m interested in adding functionality to explicitly ignore padding, which, after a bunch of research, I was pleased to find is already supported by F.cross_entropy, but I wanted to confirm that the way I’m doing it is reasonable.

self.crit = lambda i, t: F.cross_entropy(i, t, ignore_index=0, size_average=True)
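Just to make the behavior concrete, here’s a minimal sketch in plain PyTorch (outside fastai, and assuming a recent enough torch where F.cross_entropy accepts ignore_index) showing that padded targets get dropped from the loss:

import torch
import torch.nn.functional as F

logits = torch.randn(5, 10)              # 5 tokens, vocab size 10
targets = torch.tensor([3, 7, 0, 0, 2])  # 0 is the padding index

# Averages over all 5 tokens, padding included
loss_with_pad = F.cross_entropy(logits, targets)

# Averages over the 3 non-padding tokens only
loss_no_pad = F.cross_entropy(logits, targets, ignore_index=0)

print(loss_with_pad.item(), loss_no_pad.item())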

Basically what I want to confirm is that self.crit is designed to only ever be called with the input (i) and the target (t).

This seems to be the case in model.py:

loss = raw_loss = self.crit(output, y)

and

return preds, self.crit(preds, y)

are the only call sites I see; all the other references are assignments in the various model classes.

Is the lambda function I’ve written the intended approach to passing extra parameters to crit? It seems like the best solution for now, as it avoids threading the args through 3-4 levels of abstraction, but I thought I’d check in because I’d like to contribute my work to the library eventually.
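For what it’s worth, an equivalent way to bind the extra kwargs (purely a style alternative to the lambda, assuming learner is the RNN_Learner instance created elsewhere) would be functools.partial, which also keeps crit callable with just (input, target):

from functools import partial
import torch.nn.functional as F

learner.crit = partial(F.cross_entropy, ignore_index=0, size_average=True)

Either way, crit(output, y) ends up calling F.cross_entropy(output, y, ignore_index=0, size_average=True), so nothing extra has to be passed down through the training loop.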