Learner.export() raises an OverflowError

Hi,

When I try to export a learner using learner.export(), it raises an exception with the following error:

OverflowError: cannot serialize a string larger than 4GiB

I am using fastai 1.0.42. Any idea why this is happening?

Regards,
Nisar

I was able to get it working by changing the export method's source code (passing pickle_protocol=4 to the torch.save() call).
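
For anyone hitting the same thing: you can get the same effect without editing the installed source by temporarily wrapping torch.save around the export() call. This is a rough sketch; `learn` stands for your own Learner:

```python
import functools
import torch

# Temporarily force pickle protocol 4, which lifts the 4GiB-per-object
# limit of protocol 2 (the torch.save default at the time), for every
# torch.save call made while export() runs.
_orig_save = torch.save
torch.save = functools.partial(_orig_save, pickle_protocol=4)
try:
    learn.export()
finally:
    torch.save = _orig_save  # restore the original function
```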

But the resulting file has a size of 6.2GB.

I was wondering what makes the file this huge. What part of the saved state is causing this behaviour?

We can’t answer that question without knowing everything about your Learner (how it was created, with which model and data).

The model is a customized version of the Seq2Seq model from this notebook: https://github.com/fastai/fastai/blob/master/courses/dl2/translate.ipynb

I'm trying to apply it to time series data, specifically stock prices with about 14 features and ~2M datapoints. I had to create a custom ItemList and DataBunch for this dataset. The input sequence length is 50 and the target sequence length is 10.

I used RNNLearner from fastai.text to combine the data and model into the learner.
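
Roughly, the wiring looks like this (a sketch only; `data` is the custom DataBunch and `model` the customized Seq2Seq module, both assumed to already exist):

```python
from fastai.text import RNNLearner

# Hypothetical wiring matching the description above: a custom time-series
# DataBunch combined with a Seq2Seq model through fastai.text's RNNLearner.
learn = RNNLearner(data, model)
```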

Generally, where should I look in the saved state dict to learn more about this?

Thanks in advance.

Just look in your state file for the keys that are very heavy. If it's the model, there is not much you can do, but 6GB is kind of a huge model.
My guess is you may have either callbacks or processors with a very heavy state (another reference to the model, maybe?). Also make sure that any callback that uses the Learner is a LearnerCallback, or it may try to save the learner in its internal state (with aaaaall the data and the model).
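
A quick way to find them (a rough sketch; the top-level key names depend on your fastai version, so just print whatever you find):

```python
import pickle
import torch

# Load the exported state and measure each top-level entry by re-pickling
# it with protocol 4. Whatever dwarfs the others is the culprit.
state = torch.load('export.pkl', map_location='cpu')
for key, value in state.items():
    size_mib = len(pickle.dumps(value, protocol=4)) / 2**20
    print(f'{key}: {size_mib:.1f} MiB')
```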

I do not have any callbacks in place. In fact, I had to remove the RNNTrainer callback from the list of callbacks to get it working (I don't really know what the issue is there).

However, I do use a custom PreProcessor to normalize the data. It's the same as the one from Tabular, except that instead of normalizing the dataframe, it calculates the mean and SD from the dataframe and applies the normalization directly to the underlying items.
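
For concreteness, it's along these lines (a simplified sketch assuming the fastai 1.0 PreProcessor API; the real one is a bit more careful about train vs. valid):

```python
import numpy as np
from fastai.data_block import PreProcessor

class NormalizeProcessor(PreProcessor):
    "Compute mean/SD from the data, then normalize the items in place."
    def process(self, ds):
        items = np.stack(ds.items)
        if not hasattr(self, 'means'):       # fit the stats on the first
            self.means = items.mean(axis=0)  # set processed (the train set)
            self.stds = items.std(axis=0)
        ds.items = (items - self.means) / (self.stds + 1e-7)

    def process_one(self, item):
        return (item - self.means) / (self.stds + 1e-7)
```

Only means and stds live on the instance, so they should be the only things that end up in its pickled state.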

What kind of information about an object is stored in its state? Is it all instance variables and their values?

Thanks

You should see what's saved by navigating your dictionary (you can load it with torch.load).
Normally it's all the attributes, so for your processor here it would be the means and stds (which you want saved, since you'd need them to apply the normalization to new data).
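
As a plain-Python illustration of that (nothing fastai-specific), pickle serializes an object's __dict__ by default, so every instance attribute, and everything reachable from it, gets written out:

```python
import pickle

class DummyProcessor:
    def __init__(self):
        self.means, self.stds = [0.5], [0.1]  # small, cheap to pickle
        # A real Learner reference stored here would drag the whole model
        # and data into the pickle, which is how state files blow up.
        self.learn = None

p = pickle.loads(pickle.dumps(DummyProcessor()))
print(p.__dict__)  # {'means': [0.5], 'stds': [0.1], 'learn': None}
```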