Loading a saved model

Is there any way to load the weights from a multi-label model into a single-label model?
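One common approach (a minimal sketch in plain PyTorch, with hypothetical toy models — the shared layers and head sizes here are made up for illustration) is to copy only the parameters whose shapes match, which transfers the shared backbone and leaves the differently-sized classification head untouched:

```python
import torch
import torch.nn as nn

# Hypothetical models: identical backbone, differently-sized heads.
multi = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 5))   # 5 labels
single = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 1))  # 1 label

saved = multi.state_dict()      # stand-in for torch.load(...) of the checkpoint
own = single.state_dict()

# Copy only parameters that exist in both models with matching shapes;
# the final head (8 -> 5 vs 8 -> 1) is skipped automatically.
compatible = {k: v for k, v in saved.items()
              if k in own and v.shape == own[k].shape}
own.update(compatible)
single.load_state_dict(own)
```

After this, the single-label head still has its fresh initialization, so you would typically fine-tune it on your single-label data.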

What image size should I use when loading a model that was trained on progressively increasing sizes via learn.set_data(get_data(sz, bs))?

I am following lesson 4 for text classification, using a pretrained language model. When I load the encoder I get this error: @ramesh @jeremy @vikbehal

m3.load_encoder(f'adam1_enc')

     RuntimeError                              Traceback (most recent call last)
~/anaconda2/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
    513                 try:
--> 514                     own_state[name].copy_(param)
    515                 except Exception:

RuntimeError: invalid argument 2: sizes do not match at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/THC/generic/THCTensorCopy.c:51

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
<ipython-input-17-f550326d29d6> in <module>
      1 # this notebook has a mess of some things going under 'all/' others not, so a little hack here
      2 #!ln -sf ../all/models/adam3_20_enc.h5 models/adam3_20_enc.h5
----> 3 m3.load_encoder(f'adam1_enc')
      4 m3.clip=25.
      5 lrs=np.array([1e-4,1e-3,1e-3,1e-2,3e-2])

~/Downloads/fastai/courses/dl1/fastai/nlp.py in load_encoder(self, name)
    164     def save_encoder(self, name): save_model(self.model[0], self.get_model_path(name))
    165 
--> 166     def load_encoder(self, name): load_model(self.model[0], self.get_model_path(name))
    167 
    168 

~/Downloads/fastai/courses/dl1/fastai/torch_imports.py in load_model(m, p)
     38             if n+'_raw' not in sd: sd[n+'_raw'] = sd[n]
     39             del sd[n]
---> 40     m.load_state_dict(sd)
     41 
     42 def load_pre(pre, f, fn):

~/anaconda2/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
    517                                        'whose dimensions in the model are {} and '
    518                                        'whose dimensions in the checkpoint are {}.'
--> 519                                        .format(name, own_state[name].size(), param.size()))
    520             elif strict:
    521                 raise KeyError('unexpected key "{}" in state_dict'

RuntimeError: While copying the parameter named encoder.weight, whose dimensions in the model are torch.Size([67979, 300]) and whose dimensions in the checkpoint are torch.Size([21821, 300]).
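The two sizes in that error are (vocab_size, embedding_dim): the saved encoder was trained with a 21821-token vocabulary, while the current model was built for 67979 tokens. A minimal sketch of the same failure in plain PyTorch, with the sizes shrunk for illustration:

```python
import torch
import torch.nn as nn

# Toy reproduction of the mismatch above (sizes are illustrative):
model_enc = nn.Embedding(10, 4)           # current model: 10-token vocab
ckpt = {'weight': torch.zeros(6, 4)}      # saved encoder: 6-token vocab

mismatch = ckpt['weight'].shape != model_enc.weight.shape
try:
    model_enc.load_state_dict(ckpt)       # fails just like load_encoder above
    loaded = True
except RuntimeError:
    loaded = False                        # "sizes do not match"
```

The fix is to rebuild the classifier's data (and hence the model) with the same vocabulary — the itos/stoi mapping — that was in use when the encoder was saved, so both embedding matrices have the same number of rows.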

You need to create the data at size 64 with get_data first. The image size in the data object when loading the model should be the same as it was when the model was saved.
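One concrete reason the size can matter (a hypothetical sketch, not the fastai internals): if the model's head contains a fixed Linear layer, its input dimension was determined by the flattened feature-map size, which in turn follows from the image size the model was built with.

```python
import torch
import torch.nn as nn

# Head built assuming 8x8 feature maps with 3 channels (sizes are made up):
head = nn.Linear(8 * 8 * 3, 10)
out = head(torch.zeros(1, 8 * 8 * 3))     # matching size: works
try:
    head(torch.zeros(1, 16 * 16 * 3))     # features from larger images: fails
    size_ok = True
except RuntimeError:
    size_ok = False                       # shape mismatch at forward time
```

So even when the weights load cleanly, a mismatched data size can surface later as a shape error in the forward pass.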

Does a saved model also contain the loss and metrics recorded during training? I.e., can I call learn.recorder.plot_losses() after loading the model?
I ask because I am using Colab, and disconnecting and reloading causes issues.
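As far as I know the saved weights file does not include the recorder's history — that lives in memory on the Learner (treat this as an assumption about the fastai library). A simple workaround is to persist the losses yourself before the Colab session disconnects:

```python
import numpy as np

# Stand-in for learn.recorder.losses (the real values come from training):
losses = [0.92, 0.71, 0.55, 0.50]
np.save('losses.npy', np.array(losses))     # save next to the model file
restored = np.load('losses.npy').tolist()   # reload in the fresh session
```

You can then plot the restored array with matplotlib, even though the Learner's own recorder is empty after reloading.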