Solved: MixedItemList Help?

Hi all,

Quick question. I’m trying to run lr_find with a MixedItemList that has two inputs, and I’m getting an input error (see the trace at the end).

What I have tried:

# Grab one batch from the training DataLoader and run a manual forward pass
x,y = next(iter(databunch.train_dl))
with torch.no_grad():
    out = learn.model(x)

I don’t receive a mismatch error here, so what could be the issue?

(@sgugger, tagging you as I know you’re the one who implemented MixedItemList.) Should my forward accept two inputs instead of just one, i.e. forward(self, inp1, inp2)?

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-163-d81c6bd29d71> in <module>
----> 1 learn.lr_find()

/usr/local/lib/python3.6/dist-packages/fastai/train.py in lr_find(learn, start_lr, end_lr, num_it, stop_div, **kwargs)
     30     cb = LRFinder(learn, start_lr, end_lr, num_it, stop_div)
     31     a = int(np.ceil(num_it/len(learn.data.train_dl)))
---> 32     learn.fit(a, start_lr, callbacks=[cb], **kwargs)
     33 
     34 def to_fp16(learn:Learner, loss_scale:float=512., flat_master:bool=False)->Learner:

/usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
    176         callbacks = [cb(self) for cb in self.callback_fns] + listify(callbacks)
    177         fit(epochs, self.model, self.loss_func, opt=self.opt, data=self.data, metrics=self.metrics,
--> 178             callbacks=self.callbacks+callbacks)
    179 
    180     def create_opt(self, lr:Floats, wd:Floats=0.)->None:

/usr/local/lib/python3.6/dist-packages/fastai/utils/mem.py in wrapper(*args, **kwargs)
     83 
     84         try:
---> 85             return func(*args, **kwargs)
     86         except Exception as e:
     87             if "CUDA out of memory" in str(e) or tb_clear_frames=="1":

/usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     98     except Exception as e:
     99         exception = e
--> 100         raise e
    101     finally: cb_handler.on_train_end(exception)
    102 

/usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     88             for xb,yb in progress_bar(data.train_dl, parent=pbar):
     89                 xb, yb = cb_handler.on_batch_begin(xb, yb)
---> 90                 loss = loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     91                 if cb_handler.on_batch_end(loss): break
     92 

/usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     18     if not is_listy(xb): xb = [xb]
     19     if not is_listy(yb): yb = [yb]
---> 20     out = model(*xb)
     21     out = cb_handler.on_loss_begin(out)
     22 

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

TypeError: forward() takes 2 positional arguments but 3 were given

Seems like it from your stack trace. I can’t be sure, though, since it’s been a while since I implemented this.
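
Concretely, you can see in your trace that fastai’s loss_batch wraps the batch in a list and calls model(*xb), so your two inputs get unpacked into two positional arguments. A minimal sketch reproducing the error (the model and tensors here are just placeholders):

import torch
import torch.nn as nn

class OneInputModel(nn.Module):
    def forward(self, x):  # only one input parameter besides self
        return x

model = OneInputModel()
xb = [torch.ones(1), torch.ones(1)]  # a two-input batch, like MixedItemList produces

model(xb)   # works: the whole list is passed as a single argument (your manual test)
model(*xb)  # TypeError: forward() takes 2 positional arguments but 3 were given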

Fixed it right up after debugging where and how the inputs were being passed (that was my issue!). FYI, for those who want to learn how to debug something like this, here’s what I did.

I wrapped my model in a nn.Module like so:

class myModel(nn.Module):

    def __init__(self, model):
        super(myModel, self).__init__()
        self.model = model

    def forward(self, inp1, inp2):
        # Repack the two unpacked inputs into the single list my model expects
        return self.model([inp1, inp2])

This way I could control how things were fed into my model. From there I paid attention to how the stack trace changed. :slight_smile:
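
For anyone following along, hooking the wrapper in would look something like this (hypothetical usage, assuming learn is your existing Learner):

learn.model = myModel(learn.model)  # swap in the wrapper
learn.lr_find()                     # re-run and watch how the trace changes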

Nice! I’d suggest a more generic wrapper that accepts N inputs :wink: :

class myModel(nn.Module):

    def __init__(self, model):
        super(myModel, self).__init__()
        self.model = model

    def forward(self, *args):
        # args arrives as a tuple; use self.model(list(args)) if you need a list instead
        return self.model(args)

That said, PyTorch expects a forward method with N+1 parameters (counting self) for a model with N inputs, since it calls result = self.forward(*input, **kwargs). So I try to implement my forward passes the way PyTorch expects, to avoid this error.
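
For example, a two-input model written that way needs no wrapper at all, since model(*xb) lines up with the signature directly (a sketch; the layer and sizes are placeholders):

import torch
import torch.nn as nn

class TwoInputModel(nn.Module):
    def __init__(self, n_feat1, n_feat2, n_out):
        super().__init__()
        self.head = nn.Linear(n_feat1 + n_feat2, n_out)

    def forward(self, inp1, inp2):  # N=2 inputs -> N+1 parameters counting self
        # Combine both inputs, then apply the head
        return self.head(torch.cat([inp1, inp2], dim=1))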