Pickle error while exporting DynamicUnet with a modified ResNet (first stride 1)

I have been using unet_learner with a resnet34 encoder; the only change I made was to set the first stride to 1 (default 2). With this I achieved a much better IoU score. The idea came to me because I was working on solar panel segmentation, where the objects are very small in the image.

I could successfully train and test the model with this change.

Now, while deploying to production, I want to export the learner and use it on new test images. During export I get the error below; with the stock models.resnet34 (default first stride 2) the issue does not appear. Please suggest how I should solve this:


AttributeError                            Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 learn.export('u34-256-stg2-sk-lsaug-fph.pkl')

~/.conda/envs/fastai_v1/lib/python3.6/site-packages/fastai/basic_train.py in export(self, file, destroy)
    240         state['data'] = self.data.valid_ds.get_state(**xtra)
    241         state['cls'] = self.__class__
--> 242         try_save(state, self.path, file)
    243         if destroy: self.destroy()
    244

~/.conda/envs/fastai_v1/lib/python3.6/site-packages/fastai/torch_core.py in try_save(state, path, file)
    410 def try_save(state:Dict, path:Path=None, file:PathLikeOrBinaryStream=None):
    411     target = open(path/file, 'wb') if is_pathlike(file) else file
--> 412     try: torch.save(state, target)
    413     except OSError as e:
    414         raise Exception(f"{e}\n Can't write {path/file}. Pass an absolute writable pathlib obj fname.")

~/.conda/envs/fastai_v1/lib/python3.6/site-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol)
    222         >>> torch.save(x, buffer)
    223     """
--> 224     return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
    225
    226

~/.conda/envs/fastai_v1/lib/python3.6/site-packages/torch/serialization.py in _with_file_like(f, mode, body)
    147         f = open(f, mode)
    148     try:
--> 149         return body(f)
    150     finally:
    151         if new_fd:

~/.conda/envs/fastai_v1/lib/python3.6/site-packages/torch/serialization.py in <lambda>(f)
    222         >>> torch.save(x, buffer)
    223     """
--> 224     return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
    225
    226

~/.conda/envs/fastai_v1/lib/python3.6/site-packages/torch/serialization.py in _save(obj, f, pickle_module, pickle_protocol)
    295     pickler = pickle_module.Pickler(f, protocol=pickle_protocol)
    296     pickler.persistent_id = persistent_id
--> 297     pickler.dump(obj)
    298
    299     serialized_storage_keys = sorted(serialized_storages.keys())

AttributeError: Can't pickle local object 'DynamicUnet.__init__.<locals>.<lambda>'

@sgugger or @kcturgutlu, could you please suggest a fix?

You need to give a new name to your custom layers and put them in a module, so you can properly pickle and unpickle them in your production environment.

@sgugger: I have not created any custom layer; I only copied the resnet code from torchvision.models and changed the stride. Could you please elaborate on the proposed solution?

That is what creating a custom layer means: you end up with a layer that has the same name as a fastai layer, which confuses pickle.