Exporting a model trained with Fastai

Hi guys!
I am new to AI with Python and have trained a U-Net segmentation model with a ResNet18 backbone. This is the model:
1-Transfer learning:
learn = unet_learner(data, models.resnet18, metrics=metrics, wd=wd)
"Transfer learning was successful."

2-The training of the model
lr = 2e-3
learn.fit_one_cycle(10, slice(lr), pct_start=0.8)
learn.unfreeze()
start = 5e-06
end = 4e-05
lrs = slice(start, end)
learn.fit_one_cycle(12, lrs, pct_start=0.8)
"The training of the model was also successful."

3-Save, load and visualize model:
learn.save('/content/gdrive/MyDrive/camvid/Stage_15')
learn.load('/content/gdrive/MyDrive/camvid/Stage_15');
learn.show_results(rows=3, figsize=(20,20))

"Up to this point everything works."

4-Export the model:
learn.export("Unet18_Model")

I try to export the model for a test, but I get the following error message:

AttributeError                            Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 learn.export("Unet18_Modell.pkl")

3 frames
/usr/local/lib/python3.7/dist-packages/torch/serialization.py in _save(obj, zip_file, pickle_module, pickle_protocol)
    482     pickler = pickle_module.Pickler(data_buf, protocol=pickle_protocol)
    483     pickler.persistent_id = persistent_id
--> 484     pickler.dump(obj)
    485     data_value = data_buf.getvalue()
    486     zip_file.write_record('data.pkl', data_value, len(data_value))

AttributeError: Can't pickle local object 'DynamicUnet.__init__.<locals>.<lambda>'

Does anyone know what the problem is, and how to fix it?
Thanks

Could you share the full code showing how you are generating the dataloaders?
unet_learner(data, models.resnet18, metrics=metrics, wd=wd)

Hello Msivanes,
Thank you very much for your response.
I have found the bug.
This is my dataloader:

src_size = np.array(mask.shape[1:])
src_size, mask.data

(array([1280, 1918]), tensor([[[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]]]))

"Since my images were too big, I divided the dimensions by 2; this is the result:"
size = src_size // 2
size, mask.data
(array([640, 959]), tensor([[[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]]]))

src = (SegmentationItemList.from_folder(path_img)
       .split_by_fname_file('…/valid.txt')
       .label_from_func(get_y_fn, classes=codes))

data = (src.transform(get_transforms(), size=size, tfm_y=True)
        .databunch(bs=bs)
        .normalize(imagenet_stats))

Solution: someone else had the same problem and found this:

" I have pinpointed the source for this bug! So apparently, the size in the ‘dataloader.transform()’ has to be an even number (i.e. 28, 128, etc).

Otherwise if you set dataloader.transform(size=129) or 29 or any other odd number, the error message `AttributeError: Can’t pickle local object ‘DynamicUnet.init…’ will appear. (It took me quite a while to pinpoint the source of this error …).

I feel it seems to be related to PyTorch’s pickle behavior and (maybe) the resampling process during the dataloader.transform ? But I’m curious if someone can point out the exact reason why that’s happening…

DynamicUnet should have input sizes that are a multiple of 32.
"
I adjusted my inputs and it worked:

size = (size * 3) // 5
print("size_DynamicUnet_error = ({}, {})".format(size[0], size[1]))
print("\n")
# Round any odd dimension up to the next even number
if size[0] % 2:
    size[0] = size[0] + 1
if size[1] % 2:
    size[1] = size[1] + 1
print("size_DynamicUnet_ok = ({}, {})".format(size[0], size[1]))
print("\n")
size, mask.data

size_DynamicUnet_error = (768, 1150)

size_DynamicUnet_ok = (768, 1150)

(array([ 768, 1150]), tensor([[[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]]]))

If either dimension of the size is odd, you will get the error message above. For example:

(array([640, 959]), tensor([[[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]]]))
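
With even dimensions the export goes through. For completeness, here is a minimal sketch (not from the original thread) of the export/reload round trip with the fastai v1 API; 'test.png' is a placeholder path:

from fastai.basic_train import load_learner
from fastai.vision import open_image

# Export writes learn.path/'Unet18_Model.pkl'
learn.export('Unet18_Model.pkl')

# Reload for inference; 'test.png' is a placeholder image path
learn_inf = load_learner(learn.path, 'Unet18_Model.pkl')
pred_mask, _, _ = learn_inf.predict(open_image('test.png'))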