Learn.split(retina_net_split) is turning learn into 'NoneType'

Hello all! I am running a modified version of the pascal notebook in order to perform multi-label classification for images.

My notebook works fine on my computer with fast.ai version 1.0.54, but I don't have the computational power to train the bounding-box part locally, so I am using AWS, which has fast.ai version 1.0.41.

Everything works well up until just before training, when I call learn.split(retina_net_split).
This turns the learn variable into a 'NoneType', whereas just before it has the type 'fastai.basic_train.Learner'.
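For anyone hitting the same thing, here is a minimal sketch of the pattern I suspect is at play (the Learner and retina_net_split below are stand-ins, not the real fastai code): a method that mutates its object and has no return statement implicitly returns None, so assigning its result replaces the learner with None.

```python
# Hypothetical sketch, not the actual fastai source: a split() that mutates
# the object in place and returns nothing implicitly returns None.
class Learner:
    def __init__(self):
        self.layer_groups = ["all"]

    def split(self, split_fn):
        # Older in-place API style: update state, no return statement.
        self.layer_groups = split_fn(self)

def retina_net_split(learner):
    # Stand-in splitter: just names two layer groups.
    return ["body", "head"]

learn = Learner()
result = learn.split(retina_net_split)  # the learner itself is updated...
print(result)                           # ...but the return value is None
print(learn.layer_groups)
```

If that is what changed between versions (later versions returning self so the call chains), it would explain why the assignment form breaks only on 1.0.41.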

I've looked in the fast.ai changelog and see no mention of retina_net_split, so I am unsure why this is now broken in 1.0.41.

Any ideas how to fix this? Thank you!

UPDATE: Okay, I think I fixed the issue above. Instead of
learn = learn.split(retina_net_split)
I used
learn.split(retina_net_split)
which works, presumably because split modifies the learner in place and returns None in this version (so the assignment was replacing learn with None). That lets me begin training, which is when I run into another error. This one seems to come from the library source code, so I am not sure how to fix it. Running
learn.freeze()
learn.lr_find()

throws this TypeError from the forward() method:

TypeError                                 Traceback (most recent call last)
<ipython-input-48-c232684d32d4> in <module>
      1 learn.freeze()
----> 2 learn.lr_find()

~/conda/lib/python3.7/site-packages/fastai/train.py in lr_find(learn, start_lr, end_lr, num_it, stop_div, **kwargs)
     30     cb = LRFinder(learn, start_lr, end_lr, num_it, stop_div)
     31     a = int(np.ceil(num_it/len(learn.data.train_dl)))
---> 32     learn.fit(a, start_lr, callbacks=[cb], **kwargs)
     33 
     34 def to_fp16(learn:Learner, loss_scale:float=512., flat_master:bool=False)->Learner:

~/conda/lib/python3.7/site-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
    172         callbacks = [cb(self) for cb in self.callback_fns] + listify(callbacks)
    173         fit(epochs, self.model, self.loss_func, opt=self.opt, data=self.data, metrics=self.metrics,
--> 174             callbacks=self.callbacks+callbacks)
    175 
    176     def create_opt(self, lr:Floats, wd:Floats=0.)->None:

~/conda/lib/python3.7/site-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     94     except Exception as e:
     95         exception = e
---> 96         raise e
     97     finally: cb_handler.on_train_end(exception)
     98 

~/conda/lib/python3.7/site-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     84             for xb,yb in progress_bar(data.train_dl, parent=pbar):
     85                 xb, yb = cb_handler.on_batch_begin(xb, yb)
---> 86                 loss = loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     87                 if cb_handler.on_batch_end(loss): break
     88 

~/conda/lib/python3.7/site-packages/fastai/basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     21 
     22     if not loss_func: return to_detach(out), yb[0].detach()
---> 23     loss = loss_func(out, *yb)
     24 
     25     if opt is not None:

~/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

TypeError: forward() missing 1 required positional argument: 'clas_tgts'

Is this fixable?
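In case it helps anyone diagnose this: the traceback shows loss_batch calling loss = loss_func(out, *yb), so the loss's forward() only receives as many target arguments as yb contains. Here is a minimal sketch (FocalLossSketch is a stand-in, not the real RetinaNet loss) of how that unpacking produces this exact TypeError when yb holds one target instead of two:

```python
# Hypothetical sketch of the call pattern in fastai's loss_batch:
# the loss is invoked as loss_func(out, *yb), so forward() gets one
# positional argument per element of yb.
class FocalLossSketch:
    def forward(self, output, bbox_tgts, clas_tgts):
        # Stand-in loss body; the real loss computes focal loss here.
        return 0.0

    def __call__(self, *args):
        return self.forward(*args)

loss_func = FocalLossSketch()
out = "predictions"

yb = [["boxes"], ["classes"]]   # two targets: forward() is satisfied
loss_func(out, *yb)

yb = [["boxes"]]                # one target: clas_tgts is never passed
try:
    loss_func(out, *yb)
except TypeError as e:
    print(e)                    # the same TypeError as in the traceback above
```

If that is what is happening here, it would mean yb only carries one target per image, so the thing to check may be how the labels/databunch are constructed rather than the loss itself, though I am not certain.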