When I export the model with learner.export() in Google Colab, the file is saved as a zip file in Google Drive, so I can't use it on my machine. Please help me solve this problem. I want to reuse the model pre-trained on Google Colab on my local machine.
I don’t use Colab much, not sure I understand what you are saying.
When you say the .pkl file “becomes” zip, is it Colab/something that zipped it up for you to download to your machine? If you just open/extract that zip file, do you get back the .pkl file…?
Yijin
Thank you for answering. When I open the zip file, I get a folder called export that contains another folder,
like this.
In PyTorch 1.6, the default saving format is zip, so torch.save() now saves its output as a zip archive by default.
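A quick way to confirm this on your side (just a small standalone check, not part of fastai):

```python
import torch, zipfile

torch.save({'w': torch.zeros(3)}, 'example.pth')  # default behaviour on PyTorch >= 1.6
print(zipfile.is_zipfile('example.pth'))          # True: the saved file is a zip archive
```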
I want to download the export.pkl file from Colab and use it on my local machine. Last week I could download export.pkl and it was not in zip format, but this week export.pkl has become a zip file and I can't use it on my local machine with model = load_learner(path, 'export.pkl'). Should I downgrade PyTorch to the previous version on Google Colab?
RIP pickle. You won’t be missed.
There is an argument you can pass to torch.save, _use_new_zipfile_serialization=False, to use the old serialization format.
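For example, something like this should produce a file that older PyTorch versions can read (a sketch, assuming `learn` is your trained Learner and the filename is just an example):

```python
import torch

# Save with the old (non-zip) serialization so PyTorch < 1.6 can load it
torch.save(learn.model.state_dict(), 'model_weights.pth',
           _use_new_zipfile_serialization=False)
```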
I am having a similar issue. I am trying to load my trained model on a Raspberry Pi. However, the .pkl file automatically becomes a zip file type after I copy it to the Raspberry Pi. When I try to load the model, it causes a RuntimeError: "export.pkl is a zip archive". Hoping someone has a solution for this.
It's right there above your post.
Or upgrade PyTorch on the Pi, either way. The problem is that you're exporting with the new version and trying to load it with the old one.
I trained the model on Kaggle and used it on a Raspberry Pi.
Have you solved the problem?
PyTorch 1.4 is the latest version available for the Raspberry Pi. I will try the other solution. Thank you.
No, use that flag when you export on PC/Kaggle/Colab, then open it on the Pi like you did before.
Yea, I am trying that. I will get back to you after I get the result.
I am just wondering if this method works on fastai v1, because I am currently using fastai 1.0.60.
It's the PyTorch version which is the issue. If your export is a zip file, then you're using the newer version.
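A quick sanity check is to compare the PyTorch version on each machine:

```python
import torch
print(torch.__version__)  # run this on both Colab/Kaggle and the Pi
```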
I have read basic_train.py, which has the function blocks for learn.save() and learn.export(). I can see that learn.save works by calling torch.save, so it is possible to pass _use_new_zipfile_serialization=False to that function. However, does learn.export depend on PyTorch as well? I don't see anything related to PyTorch in the function block of learn.export().
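One workaround you could try, if learn.export() does end up calling torch.save somewhere down the line: temporarily force torch.save to use the old format while exporting. This is just a sketch and assumes `learn` is your trained Learner:

```python
import functools
import torch

# Temporarily force torch.save to use the old (non-zip) serialization,
# so anything export() saves internally stays readable on PyTorch < 1.6.
_original_save = torch.save
torch.save = functools.partial(_original_save,
                               _use_new_zipfile_serialization=False)
try:
    learn.export('export.pkl')   # assumes `learn` is your trained Learner
finally:
    torch.save = _original_save  # restore the default behaviour
```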
Hi, any news on how to solve this? I managed to pass the flag _use_new_zipfile_serialization=False into the torch.save function, but can't solve the export issue.