In this tutorial, the section “Changing the optimizer” defines pytorch_adamw. I was running that exact notebook in Google Colab Pro, and after creating the learner, calling learn.lr_find() raised:
TypeError: __init__() missing 1 required positional argument: 'opt'
(i.e., in the constructor of the OptimWrapper class.) The same error occurred again on rerunning learn.lr_find().
I changed the definition of pytorch_adamw from:
@delegates(torch.optim.AdamW.__init__)
def pytorch_adamw(param_groups, **kwargs):
    return OptimWrapper(torch.optim.AdamW([{'params': ps, **kwargs} for ps in param_groups]))
to:
@delegates(torch.optim.AdamW.__init__)
def pytorch_adamw(param_groups, **kwargs):
    return OptimWrapper([{'params': ps, **kwargs} for ps in param_groups], torch.optim.SGD)
and it now runs without error in Colab Pro. As the error message suggests, the OptimWrapper constructor expects a second positional argument, the optimizer, while the first positional argument is an iterable of parameter groups. Note, however, that passing torch.optim.SGD here means the learner actually trains with SGD rather than AdamW; it only demonstrates that the signature was the problem. The proper fix, with torch.optim.AdamW, is given below.
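A quick way to confirm which optimizer the wrapper is actually driving (a minimal sketch, assuming a recent fastai where OptimWrapper keeps the wrapped PyTorch optimizer on its opt attribute; the tiny Linear model is just a placeholder):

import torch
from fastai.optimizer import OptimWrapper

# Placeholder parameter groups from a throwaway model.
param_groups = [list(torch.nn.Linear(2, 2).parameters())]

opt = OptimWrapper([{'params': ps, 'lr': 1e-3} for ps in param_groups], torch.optim.SGD)
print(type(opt.opt))  # SGD, not AdamW -- the second argument decides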
The root cause is that the constructor signature of OptimWrapper changed: it now takes the parameter groups first and the PyTorch optimizer (or optimizer class) second, which makes it easier to wrap PyTorch optimizers.
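Given the new signature, you can also skip the custom function entirely and build opt_func with functools.partial, a pattern the fastai docs show for wrapping PyTorch optimizers (a minimal sketch, assuming a recent fastai):

from functools import partial
import torch
from fastai.optimizer import OptimWrapper

# OptimWrapper(params, opt, **kwargs): parameter groups first, then the
# PyTorch optimizer class. partial pre-fills the class, so the Learner
# can still call opt_func(params, lr=...) as usual.
opt_func = partial(OptimWrapper, opt=torch.optim.AdamW)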
The corresponding tutorial code can therefore be updated as follows:
@delegates(torch.optim.AdamW.__init__)
def pytorch_adamw(param_groups, **kwargs):
    return OptimWrapper([{'params': ps, **kwargs} for ps in param_groups], torch.optim.AdamW)
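For completeness, here is how the fixed function plugs back into the tutorial flow; the data pipeline and model below are placeholders standing in for the notebook's own, not an exact copy:

from fastai.vision.all import *

# Placeholder data pipeline -- substitute the DataLoaders built in the tutorial.
path = untar_data(URLs.IMAGENETTE_160)
dls = ImageDataLoaders.from_folder(path, valid='val', item_tfms=Resize(160))

learn = Learner(dls, xresnet34(n_out=dls.c), loss_func=CrossEntropyLossFlat(),
                metrics=accuracy, opt_func=pytorch_adamw)
learn.lr_find()  # no longer raises the missing-'opt' TypeError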