Bug: learn.summary() does not work on 2nd transfer learning?!

I’m fairly certain this is a bug in the official fastai v2 release that did not exist in the v2 pre-release.

I’ve noticed that in the official v2, learn.summary() runs something through the training machinery, as evidenced by the table of epoch, loss, time, etc. that appears when you call it. This did not happen in the v2 pre-release; normally that table only appears when you start training with the .fit() command.

Now when transfer learning for the 2nd time (to be specific: the 1st time I load pretrained weights, the 2nd time I load the last model trained by fastai), I get this error:

 57         elif val <= self.first_its or val >= self.last_v + self.wait_for or val >= self.total:
 58             cur_t = time.time()
 59             avg_t = (cur_t - self.start_t) / val
 60             self.wait_for = max(int(self.update_every / (avg_t+1e-8)),1)
 61             self.pred_t = avg_t * self.total
 
 AttributeError: 'NBProgressBar' object has no attribute 'start_t'

This only happens if I call learn.summary() on the 2nd transfer learning; if I comment it out, everything works fine. Has anyone seen this?
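For reference, here is a minimal sketch of the sequence that triggers it for me. This is hedged: the path, dataset, and architecture are placeholders, assuming a standard vision setup; the important part is the save/load into a fresh Learner followed by summary().

from fastai.vision.all import *

path = Path('/path/to/data')  # placeholder: your dataset folder

# 1st transfer learning: start from pretrained ImageNet weights
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(1)
learn.save('stage-1')  # save the fastai-trained weights

# 2nd transfer learning: reload the model fastai just trained
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.load('stage-1')
learn.summary()  # AttributeError: 'NBProgressBar' object has no attribute 'start_t'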

It looks related to this bug: https://github.com/fastai/fastprogress/issues/41

I’ve isolated this bug to some recent refactoring of learn.summary(); details are in that bug report.

Would it be possible to revert to the old way learn.summary() was generated? The new code uses learn.get_preds, which IMO is a very bad idea: I’ve had tons of compatibility issues with learn.get_preds when doing non-standard things. Here is the new layer_info:

def layer_info(learn, *xb):
    "Return layer infos of `model` on `xb` (only support batch first inputs)"
    def _track(m, i, o): return (m.__class__.__name__,)+total_params(m)+(apply(lambda x:x.shape, o),)
    with Hooks(flatten_model(learn.model), _track) as h:
        batch = apply(lambda o:o[:1], xb)
        # runs a 1-item batch through learn.get_preds, which spins up the full
        # Learner callback machinery, including the progress bar that crashes here
        with learn: r = learn.get_preds(dl=[batch], reorder=False)
        return h.stored

The old code uses model.eval():

def layer_info(model, *xb):
    "Return layer infos of `model` on `xb` (only support batch first inputs)"
    def _track(m, i, o):
        return (m.__class__.__name__,)+total_params(m)+(apply(lambda x:x.shape, o),)
    layers = [m for m in flatten_model(model)]
    with Hooks(layers, _track) as h:
        # direct forward pass in eval mode: no Learner, no callbacks, no progress bar
        _ = model.eval()(*apply(lambda o:o[:1], xb))
        return xb,h.stored
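Until this is reverted or fixed, here is a minimal workaround sketch, assuming the fastai v2 star import exposes flatten_model, Hooks, total_params, and apply (it does in my install). It restores the eval-mode behavior locally, so it only covers layer_info, not everything summary() reports:

from fastai.vision.all import *

def layer_info_eval(learn, *xb):
    "Old-style layer info: direct model.eval() forward pass, no get_preds or progress bar."
    def _track(m, i, o):
        return (m.__class__.__name__,) + total_params(m) + (apply(lambda x: x.shape, o),)
    with Hooks(flatten_model(learn.model), _track) as h:
        _ = learn.model.eval()(*apply(lambda o: o[:1], xb))
        return h.stored

# usage sketch, assuming `learn` is the Learner from above
x, _ = learn.dls.one_batch()
print(layer_info_eval(learn, x))

Alternatively, since the crash originates in the progress bar, wrapping the call as `with learn.no_bar(): learn.summary()` might sidestep it, though I haven’t verified that against this exact traceback.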