Problem with moving a trained model between machines


I’m trying to move a model from one machine to another and got stuck.

I have a Paperspace VM where I train image-classification models (following lesson 1).

I want to transfer the computed model to my local laptop (without GPU).

I transferred the model file (.h5) to my laptop, where I want to classify the images.

The code I use to load it is:

PATH = "/home/pryb/data/brick/"

data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=False)

When I call learn.load, I get:

Traceback (most recent call last):
File "/home/pryb/anaconda3/envs/fastai-cpu2/lib/python3.6/site-packages/torch/nn/modules/", line 514, in load_state_dict
RuntimeError: inconsistent tensor size, expected tensor [5 x 512] and src [10 x 512] to have the same number of elements, but got 2560 and 5120 elements respectively at /opt/conda/conda-bld/pytorch_1523244252089/work/torch/lib/TH/generic/THTensorCopy.c:86

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 79, in
(learn, val_tfms) = initNeuralNetwork()
File "", line 27, in initNeuralNetwork
File "/home/pryb/fastai/fastai/", line 107, in load
load_model(self.model, self.get_model_path(name))
File "/home/pryb/fastai/fastai/", line 40, in load_model
File "/home/pryb/anaconda3/envs/fastai-cpu2/lib/python3.6/site-packages/torch/nn/modules/", line 519, in load_state_dict
.format(name, own_state[name].size(), param.size()))
RuntimeError: While copying the parameter named 16.weight, whose dimensions in the model are torch.Size([5, 512]) and whose dimensions in the checkpoint are torch.Size([10, 512]).
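For reference, the element counts in the error follow directly from the reported shapes (the numbers below are taken from the traceback above, not from any fastai internals):

```python
# Shapes reported in the RuntimeError: the model built locally expects a
# [5 x 512] final-layer weight, while the saved checkpoint holds [10 x 512].
model_shape = (5, 512)
checkpoint_shape = (10, 512)

model_elems = model_shape[0] * model_shape[1]                 # 5 * 512 = 2560
checkpoint_elems = checkpoint_shape[0] * checkpoint_shape[1]  # 10 * 512 = 5120

# The second dimension (512 features) matches; only the first dimension differs.
print(model_elems, checkpoint_elems)  # 2560 5120
```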

I have the latest version from GitHub on both machines, and this code runs without problems on the originating (Paperspace) machine.

I searched the forum for the exception, but only found advice about setting precompute to False, which didn't help. What am I doing wrong?


Just a wild guess on my part, but the network topology is built from the paths data, not during learn.load. Do you have the same dataset on Paperspace as locally? That is, do you have the same number of classes (subfolders) in your dataset? It looks like you have 10 classes in the Paperspace dataset but only 5 in your local dataset (or vice versa). As I said, it's a wild guess.

Good guess. That was exactly the problem. I didn't realize I had an older dataset locally.

Thanks a lot.

That’s great!