Fastai v2 text

Redoing everything from scratch seemed to help; the key is to keep things consistent: same vocab, same encoder, etc.

I’m able to deploy on a separate machine, but inference takes a mighty long time, since I’m using learn.save() rather than learn.export(). I was getting an error similar to the one here. The suggested solution was to upgrade the library, which I did (now on fastai2 0.0.16 and fastcore 0.1.16).
I now get this error instead:

learn.export()

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-26-fa5b61306ef3> in <module>
----> 1 learn.export()

~/.local/lib/python3.6/site-packages/fastai2/learner.py in export(self, fname, pickle_protocol)
    497         #To avoid the warning that come from PyTorch about model not being checked
    498         warnings.simplefilter("ignore")
--> 499         torch.save(self, self.path/fname, pickle_protocol=pickle_protocol)
    500     self.create_opt()
    501     if state is not None: self.opt.load_state_dict(state)

~/.local/lib/python3.6/site-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization)
    368 
    369     with _open_file_like(f, 'wb') as opened_file:
--> 370         _legacy_save(obj, opened_file, pickle_module, pickle_protocol)
    371 
    372 

~/.local/lib/python3.6/site-packages/torch/serialization.py in _legacy_save(obj, f, pickle_module, pickle_protocol)
    441     pickler = pickle_module.Pickler(f, protocol=pickle_protocol)
    442     pickler.persistent_id = persistent_id
--> 443     pickler.dump(obj)
    444 
    445     serialized_storage_keys = sorted(serialized_storages.keys())

TypeError: can't pickle SwigPyObject objects
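To narrow down which part of the Learner is holding the unpicklable SwigPyObject, a generic stdlib-only probe can help. This is just a sketch (no fastai required; `find_unpicklable` and `Holder` are made-up names for illustration); the idea is to try pickling each attribute of an object individually and report the ones that fail:

```python
import pickle
import threading

def find_unpicklable(obj):
    """Return the attribute names of obj whose values fail to pickle."""
    bad = []
    for name, value in vars(obj).items():
        try:
            pickle.dumps(value)
        except Exception:
            bad.append(name)
    return bad

# Toy demo: an object with one picklable field and one that is not.
class Holder:
    def __init__(self):
        self.data = [1, 2, 3]          # pickles fine
        self.lock = threading.Lock()   # raises TypeError when pickled

print(find_unpicklable(Holder()))  # -> ['lock']
```

Running something like this on the Learner (and, recursively, on whatever attribute it flags) should point at the object that torch.save cannot serialize.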

How can I address this?