Hi everyone, I’m trying to figure out how to add a second optimizer for the ArcFace loss function, as suggested on the pytorch-metric-learning website.
from pytorch_metric_learning import losses
import torch

loss_fn = losses.ArcFaceLoss(num_classes=num_classes,
                             embedding_size=embedding_size, margin=28.6)
model_optimizer = torch.optim.Adam(embedding_net.parameters(), lr=lr, weight_decay=wd)
loss_optimizer = torch.optim.Adam(loss_fn.parameters(), lr=lr)
learn = Learner(
    data,
    embedding_net,
    loss_func=loss_fn,        # what to insert here?
    metrics=None,             # working on cosine_similarity
    opt_func=model_optimizer
)
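For reference, this is the kind of plain-PyTorch two-optimizer loop the pytorch-metric-learning docs suggest, which I’m trying to reproduce inside fastai (a minimal sketch; the `train_loader` name and its (data, labels) batch format are my assumptions):

for data, labels in train_loader:
    model_optimizer.zero_grad()
    loss_optimizer.zero_grad()
    embeddings = embedding_net(data)     # forward pass through the embedding trunk
    loss = loss_fn(embeddings, labels)   # ArcFaceLoss holds its own class weights
    loss.backward()                      # gradients flow to both parameter sets
    model_optimizer.step()               # update the embedding network
    loss_optimizer.step()                # update the loss function’s weights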
My idea was to set up something like this:
for epoch in range(epochs):
    learn.fit(1, lr=lr)
    model_optimizer.step(learn.recorder.loss)   # can step() even take the recorded loss?
    loss_optimizer.step(learn.recorder.loss)
Does this make any sense? I don’t think I would fully benefit from the training algorithm if I basically restarted it one epoch at a time…