WandbCallback memory usage question

Hello,

I have recently been trying to use Wandb and wanted to clarify what the following code is for:

Link to WandbCallback: https://github.com/fastai/fastai_dev/blob/master/dev/70_callback_wandb.ipynb

This is the particular line I am confused about:
self.valid_dl = self.dbunch.valid_dl.new(DataSource(tls=test_tls), bs=self.n_preds)

Currently I am training a model that barely fits on my GPU (bs=1), so adding the extra valid_dl was enough to cause a CUDA OOM in my training loop. I am looking for a way to remove the memory overhead, and I am a bit confused as to why we are creating a second valid_dl in the callback.

Edit: I am currently getting around this using:
WandbCallback(valid_dl=learner.dbunch.valid_dl)


Just pass log_preds=False to your wandb callback to avoid that.
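
For reference, a minimal usage sketch (the project name and learn are placeholders, and the import path is the one from released fastai rather than the fastai_dev notebook):

import wandb
from fastai.callback.wandb import WandbCallback  # released-fastai path; the fastai_dev notebook above defines the same callback

wandb.init(project="my-project")  # placeholder project name
# log_preds=False skips prediction logging, so the callback never builds
# the extra valid_dl that caused the CUDA OOM above
learn.fit_one_cycle(1, cbs=WandbCallback(log_preds=False))  # learn is an existing Learner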


Should we use the same batch size as the one used in the fit loop instead (and run multiple batches until we reach n_preds)? It will typically be sized to the limit of the GPU; a rough sketch of what I mean is below.
I’m thinking this problem will happen frequently with U-Net models. I’ll have a practical example in a few weeks, so I can let you know how it goes.
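
In plain PyTorch terms (the function and its signature are made up for illustration; the real callback would go through fastai’s own prediction machinery):

import torch

def collect_preds(model, valid_dl, n_preds, device="cuda"):
    # Run the validation DataLoader at its normal (fit-loop) batch size
    # and stop once n_preds samples have been gathered, instead of
    # materialising a single batch of size n_preds on the GPU.
    model.eval()
    preds, n = [], 0
    with torch.no_grad():
        for xb, _ in valid_dl:
            preds.append(model(xb.to(device)).cpu())
            n += preds[-1].shape[0]
            if n >= n_preds:
                break
    return torch.cat(preds)[:n_preds]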

Probably yes. I’ll refactor the part that gets custom preds sometime this week (it’s used in other places too), so I’ll keep that in mind.
