state = torch.load() is causing issues in load_learner()

I need help! It's probably a simple fix, but I have been stuck for weeks. The issue is related to load_learner().

I am deploying a machine learning model with Flask, calling load_learner("path to model dir") to load the exported model. The setup looks roughly like this (simplified; the paths and route names below are placeholders):
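from flask import Flask, request, jsonify
from fastai.vision import load_learner, open_image  # fastai v1

app = Flask(__name__)

# load_learner() is given the directory that contains the exported .pkl
learn = load_learner('path/to/model/dir')  # placeholder path

@app.route('/predict', methods=['POST'])
def predict():
    # read the uploaded image straight from the request
    img = open_image(request.files['file'])
    pred_class, pred_idx, probs = learn.predict(img)
    return jsonify({'prediction': str(pred_class)})

When deployed on PythonAnywhere I get this error: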

raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Sequential' object has no attribute 'pop'

which comes directly from model = state.pop('model') inside load_learner.

Now, here is the real kicker that I don't understand: when I run the same code on Google App Engine locally, it works.

I believe the difference is in the 'state' variable assigned inside load_learner:

state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)
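To compare the two hosts, I print what torch.load() hands back just before load_learner() runs (a quick diagnostic sketch; the path below is a placeholder, since load_learner looks for export.pkl in the directory by default):

import torch

source = 'path/to/model/dir/export.pkl'  # placeholder
state = torch.load(source, map_location='cpu')  # same branch load_learner takes on a CPU-only box
print(type(state))  # a dict of learner state on Google App Engine, but an nn.Sequential on PythonAnywhere
print(state)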

When I print 'state' on PythonAnywhere I get this:

Sequential(
  (0): Sequential(
    (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (4): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e…

But when I print 'state' on Google App Engine I get this:

{'opt_func': functools.partial(<class 'torch.optim.adam.Adam'>, betas=(0.9, 0.99)),
'loss_func': LabelSmoothingCrossEntropy(),
'metrics': [<function error_rate at 0x7f3102789c80>],
'true_wd': True, 'bn_wd': True, 'wd': 0.1, 'train_bn': True, 'model_dir': 'models',
'callback_fns': [functools.partial(<class 'fastai.basic_train.Recorder'>, add_time=True, silent=False), <class 'fastai.train.ShowGraph'>, functools.partial(<class 'fastai.callbacks.tracker.SaveModelCallback'>, monitor='valid_loss', mode='auto', name='seed_1'), functools.partial(<class 'fastai.callbacks.tracker.EarlyStoppingCallback'>, monitor='valid_loss', min_delta=0.001, patience=100), functools.partial(<class 'fastai.callbacks.mixup.MixUpCallback'>, alpha=0.4, stack_x=False, stack_y=True)],
'cb_state': {},
'model': Sequential(…many more lines of code…),
'data': {'x_cls': <class 'fastai.vision.data.ImageList'>, 'x_proc': [], 'y_cls': <class 'fastai.data_block.CategoryList'>, 'y_proc': [<fastai.data_block.CategoryProcessor object at 0x7f3175aa32e8>], 'tfms': [RandTransform(tfm=TfmCrop (crop_pad), kwargs={}, p=1.0, resolved={'padding_mode': 'reflection', 'row_pct': 0.5, 'col_pct': 0.5}, do_run=True, is_random=True, use_on_y=True)], 'tfm_y': False, 'tfmargs': {'size': 80}, 'tfms_y': [RandTransform(tfm=TfmCrop (crop_pad), kwargs={}, p=1.0, resolved={'padding_mode': 'reflection', 'row_pct': 0.5, 'col_pct': 0.5}, do_run=True, is_random=True, use_on_y=True)], 'tfmargs_y': {'size': 80}, 'normalize': {'mean': tensor([0.5171, 0.5014, 0.4180]), 'std': tensor([0.3116, 0.3022, 0.2467]), 'do_x': True, 'do_y': False}},
'cls': <class 'fastai.basic_train.Learner'>}

It seems like the state on PythonAnywhere is missing most of the keys that show up on Google App Engine; it contains only the model itself.

Setting defaults.device = torch.device('cpu') did not fix it.
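
In case the difference comes down to library versions on the two hosts, this is the quick check I can run on each of them (nothing fancy, just printing versions):

import sys
import torch
import fastai

print('python:', sys.version)
print('torch:', torch.__version__)
print('fastai:', fastai.__version__)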

Can anyone explain what causes this difference in the state returned by torch.load(), and how I can get it to work on PythonAnywhere?