Custom Callback not working with other callbacks - V2

Hi guys,
As part of my adventure using fastai_v2, I’m trying to reimplement some of my v1 models. I’m having trouble with a custom callback. Here’s a prototype:

class TestCB(Callback):
  def __init__(self, text:str=' ', temperature:float=1., n_len:int=10, max_mols:int=5):
    self.text = text
    self.temperature = temperature
    self.n_len = n_len
    self.max_mols = max_mols

  def after_train(self):
    stop_index = self.dls.train.vocab.index(EOS)
    seqs = []
    num = self.dls.train_ds.numericalize
    xb = num(self.text).to(self.dls.device)

    for _ in range(self.n_len):
      # re-run the model on the sequence generated so far
      preds,_ = self.get_preds(dl=[(xb[None],)])
      res = preds[0][-1]
      if self.temperature != 1.:
        res.pow_(1 / self.temperature)
      idx = torch.multinomial(res, 1).item()
      if idx == stop_index: break  # stop once EOS is sampled
      seqs.append(idx)
      xb = torch.cat([xb, xb.new_tensor([idx])])  # feed the sampled token back in

    seqs = ''.join([num.vocab[i] for i in seqs if num.vocab[i] not in [BOS, PAD, EOS]])

This callback is supposed to generate max_mols SMILES strings (a 1D representation of a molecule), each at most n_len tokens long, at the end of every epoch. Starting from a seed token (i.e., text), the model keeps sampling tokens until EOS is sampled; when that happens the iteration stops.
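To make the sampling logic above easier to discuss separately from fastai, here is the same loop in plain Python (no torch or fastai needed). `probs_fn` is a hypothetical stand-in for the model's softmax output over the vocabulary; the temperature scaling mirrors the `res.pow_(1 / temperature)` step in the callback:

```python
import random

def sample_sequence(probs_fn, seed, stop_token, n_len, temperature=1.0, rng=None):
    """Autoregressively sample up to n_len tokens, stopping at stop_token.

    probs_fn(seq) returns a dict {token: probability} for the next token;
    it stands in for the model's predicted distribution (an assumption of
    this sketch, not a fastai API).
    """
    rng = rng or random.Random(0)
    seq = list(seed)
    out = []
    for _ in range(n_len):
        probs = probs_fn(seq)
        # temperature scaling: p ** (1/T), then renormalize
        scaled = {t: p ** (1.0 / temperature) for t, p in probs.items()}
        total = sum(scaled.values())
        tokens, weights = zip(*scaled.items())
        tok = rng.choices(tokens, weights=[w / total for w in weights])[0]
        if tok == stop_token:
            break  # EOS sampled: stop generating
        out.append(tok)
        seq.append(tok)  # feed the sampled token back in
    return out
```

For example, a `probs_fn` that always predicts EOS yields an empty sequence, while one that never predicts EOS runs for the full n_len steps.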

The callback does work, but only on its own! When I try to use it together with the CSVLogger callback, I get this error:
ValueError: I/O operation on closed file.

In addition, when TestCB is used, the results table stays empty for the whole fit: I can't see the losses, accuracy, or any other metric. Like this:

epoch train_loss valid_loss accuracy perplexity time

Am I missing something here?

Wild guess. Does the callback’s _order make sense relative to the others?
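For anyone unfamiliar with what the ordering attribute does: each callback carries a numeric order and the learner runs them sorted by it (fastai v2 uses `order`; v1 used `_order`). A toy model of that mechanism, with made-up class names and order values chosen purely for illustration:

```python
class CallbackBase:
    # fastai v2 callbacks carry a class-level `order` (v1: `_order`)
    order = 0

class CSVLoggerLike(CallbackBase):
    order = 60   # illustrative value, not fastai's real one

class TestCBLike(CallbackBase):
    order = 70   # a higher order runs after the logger

def run_order(cbs):
    """Return callback class names in the order the learner would fire them."""
    return [type(cb).__name__ for cb in sorted(cbs, key=lambda cb: cb.order)]
```

So if TestCB interferes with CSVLogger, giving it a higher order (so it fires after the logger has finished its work for that event) is one thing to try.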

What does model.reset() do?
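For context: in fastai's recurrent language models (e.g. AWD-LSTM), `reset()` clears the hidden state the model carries across batches, so generation from a fresh seed doesn't inherit state from training. A toy stand-in for the pattern (not fastai's actual implementation):

```python
class StatefulModel:
    """Minimal sketch of a stateful model exposing reset()."""
    def __init__(self):
        self.hidden = None  # stands in for the RNN hidden state

    def forward(self, x):
        # pretend the hidden state is just a running count of tokens seen
        self.hidden = (self.hidden or 0) + len(x)
        return self.hidden

    def reset(self):
        # drop any state carried over from previous sequences
        self.hidden = None
```

Calling `reset()` before sampling in the callback may be worth trying, since `after_train` fires with whatever hidden state the last training batch left behind.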