Optimizer / OptimWrapper is not callable. Trying to train only some parts of the network

1. As a custom PyTorch optimizer:
def opt_func(params,lr,**kwargs): return OptimWrapper(torch.optim.Adam(params, lr))

learn = Learner(dsets,vgg.cuda(), metrics=accuracy , opt_func=opt_func(vgg.classifier.parameters(),2e-3))
learn.fit_one_cycle(2,5e-3)

/usr/local/lib/python3.6/dist-packages/fastai/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
105 moms=None, cbs=None, reset_opt=False):
106 "Fit self.model for n_epoch using the 1cycle policy."
--> 107 if self.opt is None: self.create_opt()
108 self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
109 lr_max = np.array([h['lr'] for h in self.opt.hypers])

/usr/local/lib/python3.6/dist-packages/fastai/learner.py in create_opt(self)
147 def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)
148 def create_opt(self):
--> 149 self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
150 if not self.wd_bn_bias:
151 for p in self._bn_bias_state(True): p['do_wd'] = False

TypeError: 'OptimWrapper' object is not callable

2. Using the fastai Adam:
learn = Learner(dsets,vgg.cuda(), metrics=accuracy , opt_func=Adam(params=vgg.classifier.parameters(),lr=2e-3))
learn.fit_one_cycle(2,5e-3)

/usr/local/lib/python3.6/dist-packages/fastai/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
105 moms=None, cbs=None, reset_opt=False):
106 "Fit self.model for n_epoch using the 1cycle policy."
--> 107 if self.opt is None: self.create_opt()
108 self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
109 lr_max = np.array([h['lr'] for h in self.opt.hypers])

/usr/local/lib/python3.6/dist-packages/fastai/learner.py in create_opt(self)
147 def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)
148 def create_opt(self):
--> 149 self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
150 if not self.wd_bn_bias:
151 for p in self._bn_bias_state(True): p['do_wd'] = False

TypeError: 'Optimizer' object is not callable

It seems you have to pass your opt_func function itself (not the result of calling it) to your Learner, as shown here.

@boJa is indeed correct. As the name implies, you should pass the function directly to the Learner, such as:

learn = Learner(dls, model, opt_func=opt_func)

fastai handles parameterizing it (and, if you pass in a splitter, building the param groups).
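
For the original example, a corrected version would look roughly like the sketch below (reusing the vgg model and dsets from the question, and the imports already in that notebook). Note this matches the older OptimWrapper signature seen in the traceback; in newer fastai releases OptimWrapper is instead built from the params plus the optimizer class, so adjust to your version:

# pass a *function*; fastai calls it later in create_opt with the params from the splitter
def opt_func(params, lr=2e-3, **kwargs): return OptimWrapper(torch.optim.Adam(params, lr=lr))

learn = Learner(dsets, vgg.cuda(), metrics=accuracy, opt_func=opt_func)
learn.fit_one_cycle(2, 5e-3)

The key difference from the failing snippets is that opt_func is passed uncalled; as the traceback shows, create_opt calls it with self.splitter(self.model) and lr=self.lr.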

The splitter function only outputs the trainable parameters to be used.
How do I set a different lr for different layers, as we do in PyTorch optimizers by passing a dict?
Perhaps I should use OptimWrapper and use callbacks to change it.

The splitter assigns param groups for your optimizer, so you can do things like fit(lr=slice(1e-3, 1e-6, 1e-4)), e.g. the resnet split: https://github.com/fastai/fastai/blob/master/fastai/vision/learner.py#L105

You should pass in your splitter with your groups as:

Learner(dls, model, opt_func=opt_func, splitter=mysplitter)

For example with the resnet I showed above we have:

def _resnet_split(m): return L(m[0][:6], m[0][6:], m[1:]).map(params)

(they all must have that L().map(params), and params isn't predefined by you, it's from fastai)

Also worth noting: this is for a cnn_learner-created resnet, so m[0] refers to the body and m[1] refers to the head. This can also be done with L(m.body, m.head).map(params) if your layer groups should use an attribute instead. A sketch for the VGG case follows below.
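
Applied to the VGG from the original question, a splitter in that style might look like this sketch (the two-group split into features and classifier is just an illustrative assumption based on the standard torchvision VGG attributes; params is fastai's helper, and fastai's own Adam is used as opt_func since it handles fastai param groups directly):

# hypothetical splitter: group 1 = conv body, group 2 = classifier head
def vgg_splitter(m): return L(m.features, m.classifier).map(params)

learn = Learner(dsets, vgg.cuda(), metrics=accuracy, opt_func=Adam, splitter=vgg_splitter)
learn.fit_one_cycle(2, lr_max=slice(1e-5, 1e-3))

With lr_max=slice(1e-5, 1e-3) the first group (the body) trains at 1e-5 and the last group (the head) at 1e-3, which is the discriminative-lr behaviour you'd otherwise get from a dict of param groups in plain PyTorch.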

Trainable parameters are dictated by calling learn.freeze(), and you can freeze up to n parameter groups, so they're related but separate things.
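
So with the hypothetical two-group vgg_splitter sketched above, training only the classifier would look roughly like:

learn.freeze()    # freezes everything except the last parameter group (the head)
learn.fit_one_cycle(2, 2e-3)

learn.unfreeze()  # later, unfreeze and fine-tune the whole model with discriminative lrs
learn.fit_one_cycle(2, lr_max=slice(1e-5, 1e-3))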

Thanks a lot! This clears up a lot of things.