This is a wiki post - feel free to edit to add links from the lesson or other useful info.
Jeremy, FYI: the DDIM nb still has a betamax of 0.02 and a transformi that scales to -1 to +1.
Great lesson, thanks!
Super cool to see how easy it was to use a callback to send stuff to WandB. I had written this quick and dirty CB to do something similar for TensorBoard, and so far it just works:
#|export
from torch.utils.tensorboard import SummaryWriter

# Callback, MetricsCB and Learner are the classes defined in the course's miniai library
class TensorboardCB(Callback):
    order = MetricsCB.order + 1
    def __init__(self, name=None): self.writer = SummaryWriter(comment=f'_{name}')

    def after_batch(self, learn: Learner):
        # Log the loss after every batch, tagged train/validation
        train = 'train' if learn.model.training else 'validation'
        idx = learn.dl_len*learn.epoch + learn.iter
        self.writer.add_scalar(f'loss/{train}', learn.loss.item(), idx)
        self.writer.flush()

    def after_epoch(self, learn: Learner):
        if hasattr(learn, 'recorder'):
            # Log all other metrics after each epoch
            d = learn.recorder[-1]
            for k, v in d.items():
                if k == 'loss': continue
                self.writer.add_scalar(f'{k}/{d.train}', v, d.epoch)
            self.writer.flush()

    def after_fit(self, learn: Learner): self.writer.close()
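For reference, this is roughly how the callback gets hooked into a learner and how the dashboard is launched. It's only a minimal sketch: the Learner and MetricsCB constructor arguments below are assumptions based on the course's miniai library and may not match your notebook exactly.

import torch.nn.functional as F

# Usage sketch: Learner/MetricsCB arguments are assumed, adjust to your own notebook
cbs = [MetricsCB(), TensorboardCB(name='fashion_mnist')]
learn = Learner(model, dls, F.cross_entropy, lr=1e-2, cbs=cbs)
learn.fit(3)

Since SummaryWriter(comment=...) writes to ./runs by default, running tensorboard --logdir runs then shows each training run as a separate entry.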
And this is the result on the TB side:
It’s cool to be able to see the different runs and all, but after playing with TensorBoard a bit it’s become clear that the solutions offered by WandB and others are much more complete. One big omission is being able to associate a set of hyperparams with each run, and to easily separate different projects. TensorBoard has had this issue open since ~2017. Nonetheless, getting the callback to work was a fun exercise.
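For comparison, here is a hedged sketch of what the same idea looks like on the WandB side: wandb.init takes both a project name and a config dict of hyperparams that gets attached to the run, which is exactly the association that's awkward in TensorBoard. This is not the lesson's actual WandB callback, just an illustrative one whose hooks mirror the TensorboardCB above.

import wandb

class WandBCB(Callback):
    order = MetricsCB.order + 1
    def __init__(self, project, config=None):
        # `project` groups runs into one dashboard, `config` attaches the hyperparams to the run
        self.project, self.config = project, config
    def before_fit(self, learn): wandb.init(project=self.project, config=self.config)
    def after_batch(self, learn):
        train = 'train' if learn.model.training else 'validation'
        wandb.log({f'loss/{train}': learn.loss.item()})
    def after_fit(self, learn): wandb.finish()

With something like WandBCB(project='ddim', config={'lr': 1e-2, 'betamax': 0.02}), every run lands in the same project with its hyperparams filterable in the UI.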