Can't run GPU-trained text model on CPU

I have some code that I adapted from the IMDB lesson code. I was able to train the ULMFiT language model and classifier on an AWS EC2 GPU instance. I then copied the model files (*.pth, *.pkl) down to my MacBook and tried to load them so I could run predictions on my CPU-only machine.

Here is the relevant code for loading the models:

data_clas = load_data(path, fname='data_clas.pkl', bs=16)
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('third')

It crashes on that last line with a runtime error. The message at the end of the stack trace says:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

learn.load_encoder() does not accept a map_location parameter, so I am not sure how to pass this through to the underlying torch.load.
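
The closest workaround I can think of is to skip load_encoder() and call torch.load myself with map_location='cpu', then push the state dict into the encoder by hand. This is only a sketch and assumes fastai v1 conventions: that the saved file sits under path/'models', that the encoder is the first child of the classifier model, and that fastai may have wrapped it in a module with a .module attribute:

import torch

# Sketch: replicate load_encoder, but force the checkpoint onto the CPU.
state = torch.load(path/'models'/'third.pth', map_location='cpu')  # assumes the default 'models' dir
encoder = learn.model[0]                 # assumption: the first module of the classifier is the encoder
if hasattr(encoder, 'module'):           # unwrap if fastai wrapped the AWD_LSTM
    encoder = encoder.module
encoder.load_state_dict(state)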

Any ideas?

I see my problem. Silly error: I was being careless. I want

learn.load('third')

to load the classifier, not

learn.load_encoder('third')

which is for loading the ULMFiT language model encoder.
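
For anyone who hits the same thing, the CPU-side code ends up looking roughly like this. It's a sketch assuming fastai v1, where (as far as I can tell) learn.load maps the checkpoint onto whatever device the learner is already on, so it works on a CPU-only machine without any map_location fiddling; the example sentence is just an illustration:

from fastai.text import *

data_clas = load_data(path, fname='data_clas.pkl', bs=16)
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load('third')   # load the full classifier weights, mapped to the learner's device (CPU here)
pred = learn.predict("This movie was surprisingly good.")   # returns (category, class index, probabilities)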