Load Learner - AttributeError: Can't get attribute 'FlattenedLoss' on <module 'fastai.layers'

Hi,
I have the same issue.

I installed fastai on Ubuntu machine. I have a multi-category model trained on Colab. I copied this model to the local machine and tried loading the model:
learn_infer = load_learner(path/'export.pkl')
I am getting the following error:

AttributeError: Can't get attribute 'FlattenedLoss' on <module 'fastai.layers' from '/home/lenovo/miniconda3/envs/fastai2/lib/python3.8/site-packages/fastai/layers.py'>
Is there an issue with my installation??

I am getting the exact same error. Has anyone figured this out? I am using FastAI version 2.0.11.

Here is how I exported my trained model:

conversionLearner = cnn_learner(data, models.resnet18)
conversionLearner.load(bestPth)
conversionLearner.export('../FastAITrainedModels/face.pkl')

Then, in another script I try to use it:

# Load CNN Module
time_start = time.time()
learner = load_learner("FastAITrainedModels/face.pkl")
print("Model Loading Time Elapsed: " + str(time.time() - time_start))

Traceback:

torch.device: cpu
Traceback (most recent call last):
  File "fastai_predict_test.py", line 17, in <module>
    learner = load_learner("./face.pkl")
  File "/usr/local/lib/python3.8/site-packages/fastai/learner.py", line 539, in load_learner
    res = torch.load(fname, map_location='cpu' if cpu else None)
  File "/usr/local/lib/python3.8/site-packages/torch/serialization.py", line 585, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.8/site-packages/torch/serialization.py", line 765, in _legacy_load
    result = unpickler.load()
AttributeError: Can't get attribute 'FlattenedLoss' on <module 'fastai.layers' from '/usr/local/lib/python3.8/site-packages/fastai/layers.py'>

Python 3 version:

Python 3.8.5 (default, Jul 21 2020, 10:48:26) 
[Clang 11.0.3 (clang-1103.0.32.62)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import fastai
>>> print(fastai.__version__)
2.0.11

Just had this problem. It wasn’t happening some days ago. I’m using fastai 2.0.15 on Google Colab. Funny thing is, loading the model on Gradient with fastai 2.0.13 works just fine.

UPDATE:
I installed Fastai 2.0.13 on Colab too and it works just fine.


Yes, it works for me too.

What happened is that the losses were moved from fastai.layers to fastai.losses (as it functionally made more sense).
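The failure mode is easy to reproduce with nothing but the standard library: pickle stores classes by module path, so moving a class between modules breaks pickles created under the old layout. A minimal sketch (old_layers is a made-up stand-in for fastai.layers before 2.0.15):

```python
import pickle
import sys
import types

# Simulate the "old" layout: a class living in a module named old_layers
old_layers = types.ModuleType("old_layers")

class FlattenedLoss:
    pass

FlattenedLoss.__module__ = "old_layers"
old_layers.FlattenedLoss = FlattenedLoss
sys.modules["old_layers"] = old_layers

# Pickling an instance records the reference "old_layers.FlattenedLoss",
# not the class's code
blob = pickle.dumps(FlattenedLoss())

# Simulate the "new" layout: the class was moved, so the old attribute is gone
del old_layers.FlattenedLoss

try:
    pickle.loads(blob)
except AttributeError as exc:
    print(exc)  # the same kind of error the posts above show
```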


Just ran into this issue as well. @muellerzr what is fastai's stance on semantic versioning? As this is a breaking change, wouldn't it constitute at the very least a minor version bump instead of a patch?

Grateful for the fastai library. :hugs:

We have changelogs now (this one just didn't make it there right away; rarely, things are missed).

re: semantic versioning, I can’t comment on that. :slight_smile:


@muellerzr First of all, thank you for the great work you put into the project!

The change (#2843) has the potential to break the loading of a pickled model. This happens when the model was trained with a fastai version < 2.0.15 and then is loaded with fastai >= 2.0.15.

If the model can be trained quickly, it can just be retrained with the newest fastai version.
However, some models may take days to train or one just wants to use a given model in production with the latest fastai version.

To load a model that was trained with fastai < 2.0.15 using fastai >= 2.0.15, I temporarily re-added the moved losses to the layers module like this:

import fastai.losses
import fastai.layers

fastai.layers.BaseLoss = fastai.losses.BaseLoss
fastai.layers.CrossEntropyLossFlat = fastai.losses.CrossEntropyLossFlat
fastai.layers.BCEWithLogitsLossFlat = fastai.losses.BCEWithLogitsLossFlat
fastai.layers.BCELossFlat = fastai.losses.BCELossFlat
fastai.layers.MSELossFlat = fastai.losses.MSELossFlat
fastai.layers.L1LossFlat = fastai.losses.L1LossFlat
fastai.layers.LabelSmoothingCrossEntropy = fastai.losses.LabelSmoothingCrossEntropy
fastai.layers.LabelSmoothingCrossEntropyFlat = fastai.losses.LabelSmoothingCrossEntropyFlat

When the model is retrained and pickled again, make sure to use the moved losses from the fastai.losses module.
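An alternative to monkeypatching the modules is a custom Unpickler that rewrites the old module path while loading. This is only a stdlib sketch with made-up module names (fake_layers/fake_losses stand in for fastai.layers/fastai.losses); since load_learner routes through torch.load internally, wiring this in there would need torch.load's pickle_module argument, so the monkeypatch above is usually simpler in practice:

```python
import io
import pickle
import sys
import types

class RenamingUnpickler(pickle.Unpickler):
    """Redirect classes pickled under an old module path to the new one."""
    REMAP = {"fake_layers": "fake_losses"}  # old module name -> new module name

    def find_class(self, module, name):
        return super().find_class(self.REMAP.get(module, module), name)

# --- self-contained demo with fake modules standing in for fastai ---
fake_layers = types.ModuleType("fake_layers")

class CrossEntropyLossFlat:
    pass

CrossEntropyLossFlat.__module__ = "fake_layers"
fake_layers.CrossEntropyLossFlat = CrossEntropyLossFlat
sys.modules["fake_layers"] = fake_layers

# The blob records the reference "fake_layers.CrossEntropyLossFlat"
blob = pickle.dumps(CrossEntropyLossFlat())

# "Upgrade": the class now lives in fake_losses; the old module lost it
fake_losses = types.ModuleType("fake_losses")
fake_losses.CrossEntropyLossFlat = CrossEntropyLossFlat
sys.modules["fake_losses"] = fake_losses
del fake_layers.CrossEntropyLossFlat

# A plain pickle.loads(blob) would now fail; the renaming unpickler succeeds
obj = RenamingUnpickler(io.BytesIO(blob)).load()
assert isinstance(obj, CrossEntropyLossFlat)
```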

EDIT:
I also ran into this error AttributeError: 'TypeDispatch' object has no attribute 'owner' so I ended up pinning some previous versions:
conda install -y -c fastai fastcore=1.0.12
conda install -y -c pytorch -c fastai fastai=2.0.13


Hi,

I'm also getting a different error when trying to load the exported model:

AttributeError: Can't get attribute 'xResNet' on <module '__main__'>

xResNet is the name of the model. Also I get the same error in the notebook if I restart the notebook and try to load the saved model.
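This error has a different cause than the version mismatch above: pickle stores classes by reference, not by source code. If xResNet was defined in the notebook itself, the export records it as living in the module __main__, and whatever process unpickles it must be able to find it there. A stdlib-only illustration (XResNetStub is a made-up stand-in):

```python
import pickle

# Stand-in for a custom model class defined at the top level of a notebook
class XResNetStub:
    pass

blob = pickle.dumps(XResNetStub())

# The pickle records the class's *name* and defining module, not its code.
# When the class is defined in a notebook or script, that module is
# '__main__', which a different process cannot import.
assert b"XResNetStub" in blob

# The usual fix: move the class definition into a real module
# (e.g. a hypothetical models.py) and import it in both the training
# notebook and the inference script before calling load_learner.
```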

Thank you, I updated my requirements.txt and that fixed my issue.

Is there a simpler way to fix this? My issue was weird, as load_learner worked during testing, but after I attempted deployment I ran into the above errors, and I still do now that I've returned to testing. Is there a way to reset the environment?

Pin the versions of fastai, fastcore, and torch you are using in your requirements.txt file (this is also just good practice in general, btw).


Thanks for the reply! My requirements.txt is the same as in the fastai book, with fastai>=2.0.0. I assumed that training and loading would use the same version. In fact, the Voila loader on my Binder app says the requirements are satisfied. However, I still run into the AttributeError mentioned above. I genuinely don't know where the problem is, as I have been using the latest fastai library in all my work so far.

EDIT: The model works fine before I export it; it is after I export and try to load it again that the error occurs.

Again, that's not pinning it; that's providing a minimum version and above. You need to change it to something like (do not use this, this is just an example):

fastai == 2.0.3
fastcore == 1.0.4
torch == 1.4.0

You can figure out which versions you are using with !pip show fastai, etc., for your various packages.
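If you'd rather not copy the versions out of pip show by hand, the installed versions can be turned into pin lines programmatically with the standard library's importlib.metadata. A small sketch (the package list is just an example):

```python
from importlib import metadata

def pin_line(pkg: str) -> str:
    """Return a 'pkg==version' line for requirements.txt, or a comment if absent."""
    try:
        return f"{pkg}=={metadata.version(pkg)}"
    except metadata.PackageNotFoundError:
        return f"# {pkg} not installed"

for pkg in ("fastai", "fastcore", "torch"):
    print(pin_line(pkg))
```

pip freeze gives equivalent output for everything installed; the point is to paste exact `==` pins, not `>=` ranges, into the deployment's requirements.txt.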


Thanks for the note on pinning. My fastai version, nonetheless, is 2.1.4 (the latest according to the GitHub changelogs). Should I be pinning the versions of fastcore and PyTorch as well, then? And if so, where can I find the versions that are compatible with fastai?

EDIT: fastai is v2.1.4 and fastcore is 1.3.1 (the latest for both). PyTorch is apparently not installed, but previously I did not have any problems without it.

EDIT (2): The error arose literally a few hours ago. I developed the model and loaded it with no problem, all within the past 24 hours, so it's not like I have been basing my work on older code, which is why this is strange and I am not sure how to resolve it.

EDIT (3): My error is as follows:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-5-7726f78efb69> in <module>
----> 1 learn_inf = load_learner(path/'export.pkl')

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in load_learner(fname, cpu)
    551     "Load a `Learner` object in `fname`, optionally putting it on the `cpu`"
    552     distrib_barrier()
--> 553     res = torch.load(fname, map_location='cpu' if cpu else None)
    554     if hasattr(res, 'to_fp32'): res = res.to_fp32()
    555     if cpu: res.dls.cpu()

/opt/conda/envs/fastai/lib/python3.8/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
    592                     opened_file.seek(orig_position)
    593                     return torch.jit.load(opened_file)
--> 594                 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
    595         return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
    596 

/opt/conda/envs/fastai/lib/python3.8/site-packages/torch/serialization.py in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
    851     unpickler = pickle_module.Unpickler(data_file, **pickle_load_args)
    852     unpickler.persistent_load = persistent_load
--> 853     result = unpickler.load()
    854 
    855     torch._utils._validate_loaded_sparse_tensors()

AttributeError: Can't get attribute 'CrossEntropyLossFlat' on <module 'fastai.layers' from '/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/layers.py'>

I have checked my dependencies, restarted my kernels multiple times, tried on other notebooks, all with the same error. I am pretty stumped and unable to find a solution for it online.


My example above did, so yes, you should.

Those are not your versions. These need to be based on the environment you trained your model in. The solution is to pin your versions based on what you trained in; what you deploy must match those versions, otherwise things will break.

This error shows that you did not train (when you exported your learner) on the latest version of fastai.


That seems to have solved the problem (I had to test it out in multiple places to make sure it worked 100%). I am not sure how my development code reverted to an older version, but it does seem to be an inconsistency among dependencies, like you mentioned.


On using load_learner, I got the following error. Any ideas on how to resolve the issue?

----> 1 learn_ett = load_learner(Path('/baseline-b4'))

/opt/conda/lib/python3.7/site-packages/fastai/learner.py in load_learner(fname, cpu)
    551     "Load a `Learner` object in `fname`, optionally putting it on the `cpu`"
    552     distrib_barrier()
--> 553     res = torch.load(fname, map_location='cpu' if cpu else None)
    554     if hasattr(res, 'to_fp32'): res = res.to_fp32()
    555     if cpu: res.dls.cpu()

/opt/conda/lib/python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
    592                     opened_file.seek(orig_position)
    593                     return torch.jit.load(opened_file)
--> 594                 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
    595         return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
    596

/opt/conda/lib/python3.7/site-packages/torch/serialization.py in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
    851     unpickler = pickle_module.Unpickler(data_file, **pickle_load_args)
    852     unpickler.persistent_load = persistent_load
--> 853     result = unpickler.load()
    854
    855     torch._utils._validate_loaded_sparse_tensors()

AttributeError: Can't get attribute 'NonNativeMixedPrecision' on <module 'fastai.callback.fp16' from '/opt/conda/lib/python3.7/site-packages/fastai/callback/fp16.py'>

AttributeError: 'wrapper_descriptor' object has no attribute '__code__'

I get this error while loading the model in another notebook. Please explain my mistake and a solution. Thanks!