"AttributeError: LSTM object has no attribute flat_weights_names"

Hi all,

After creating a text classifier and exporting it with learn.export(), I am reloading it into my Python web app (Flask framework) with learn = load_learner(), as described in https://docs.fast.ai/tutorial.inference.html#Language-modelling.
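
For context, the code looks roughly like this (the model path, route name, and request field are just placeholders):

from fastai.text import *
from flask import Flask, request, jsonify

app = Flask(__name__)

# load the exported classifier once at startup
# ('models' is a placeholder for the folder that contains export.pkl)
learn = load_learner('models')

@app.route('/predict', methods=['POST'])
def predict():
    text = request.json['text']  # placeholder field name
    pred_class, pred_idx, probs = learn.predict(text)
    return jsonify({'prediction': str(pred_class)})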

However, when attempting to make a prediction I am receiving the following error message:

“AttributeError: LSTM object has no attribute ‘_flat_weights_names’”

The prediction works in my Jupyter notebook.

I’m grateful for any advice, thanks!

I am having the same error today when trying to load a model that previously ran perfectly.

I am pretty sure this has something to do with Flask.

I created a new directory with a fresh venv and installed just torch and fastai:

mkdir dostuff
cd dostuff
python3 -m venv env
source env/bin/activate
pip install torch torchvision
pip install fastai
touch app.py

Then I put the exported model in a model directory. In app.py I just had:

from fastai.text import *
learn = load_learner('model')
print(learn.predict('example'))

Run from the terminal, this produced the correct output.

Then, however, I added Flask to the environment (pip install flask). I did not even import it in app.py, yet running the script in the terminal now gives me the same error output that I was getting in the web app.

So I assume it is some compatibility issue with Flask?

I had the same problem a few days ago. In my case, downgrading torch to version 1.3.0 helped.

Best regards

I had a similar issue today. After doing a fresh install of fastai v1 (version 1.0.60 and pytorch 1.4.0), the imdb notebook from course-v3 lesson 3 failed on this cell:

data = TextClasDataBunch.from_csv(path, 'texts.csv')
data.show_batch()

with the error message:

ValueError: Value must be a nonnegative integer or None

The same code ran without problem in an old environment (fastai version 1.0.57 and pytorch 1.2.0), but not with fastai 1.0.60 and pytorch 1.2.0.

It looks like they changed the code of LSTM, so you have to re-export your model after training (or loading saved weights) with PyTorch 1.4.0. This is on their side; we can't do anything about it inside fastai.
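
As a rough sketch (paths and file names below are placeholders), re-exporting in a training environment that already has PyTorch 1.4.0 would look something like this:

from fastai.text import *

# rebuild the learner with torch 1.4.0 installed, load the previously saved
# weights, then export again so the pickled LSTM matches the new PyTorch internals
path = Path('data')                            # placeholder project folder
data_clas = load_data(path, 'data_clas.pkl')   # placeholder DataBunch file
learn = text_classifier_learner(data_clas, AWD_LSTM)
learn.load('classifier-stage-1')               # placeholder saved weights
learn.export()                                 # writes a fresh export.pkl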

Got the same error when trying to deploy a trained model from FastAI (which I trained in Google Colab) with Flask. For me it worked when I ran pip install torch==1.2.0 and pip install fastai==1.0.57 in my virtual environment on my laptop. Hope it helps.

Yeah, I downgraded pytorch to 1.3.1 and it worked for me. Not sure why this issue is happening with pytorch 1.4.0.

Niels, thank you. Downgrading from Torch 1.4.0 to 1.2.0 fixed the LSTM issue for me too. I am also going from Colab to my laptop.

For anyone having issues with pipenv, I also had to pin torchvision:

torch = "==1.2.0"
torchvision = "==0.4.0"

My data.show_batch() also stopped working after pytorch got upgraded this morning. It was working before and now raises the same ValueError (Value must be a nonnegative integer or None). My environment:

=== Software ===
python : 3.7.6
fastai : 1.0.60
fastprogress : 0.2.2
torch : 1.4.0
nvidia driver : 435.21
torch cuda : 10.1 / is available
torch cudnn : 7603 / is enabled

Skipping the show for now - I guess I'll find out where the rest of the deaths occur.

I have the same error when trying to adapt this code to my own data: https://analyticsindiamag.com/a-hands-on-guide-to-regression-with-fast-ai/

The show_batch() error seems to be a pandas issue, and it's already fixed for the next release: https://github.com/fastai/fastai/pull/2484/commits

(Also, it was not related to the LSTM issue in this thread - sorry for spamming.)