Text model deployment (Render): stuck creating TextClassificationInterpretation

Hi,

I trained a text model with fastai v1 on Google Colab, made some predictions, and also generated some interpretations (using TextClassificationInterpretation).

Now I’m trying to deploy the model to Render following this example. I already deployed it successfully on my local machine, and it predicts correctly there as well.

However, when I add `txt_ci = TextClassificationInterpretation.from_learner(learn_c)` to the server code and try to run the local server again, the execution freezes.
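
Roughly, this is the part of the server code in question (just a sketch: `export.pkl` and the surrounding structure come from my setup, not necessarily from the Render example as-is):

```python
from pathlib import Path

from fastai.text import *  # in my environment this provides load_learner and TextClassificationInterpretation

path = Path(__file__).parent

# Loading the exported learner and predicting works fine
learn_c = load_learner(path, 'export.pkl')

# Adding this line makes the local server hang on startup
txt_ci = TextClassificationInterpretation.from_learner(learn_c)
```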

Any idea what might be going on?

Thanks.

PS: The model was trained on a GPU and my local environment doesn’t have one.
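
In case it’s relevant, this is the CPU-forcing pattern I’ve seen recommended for fastai v1 inference, which I could add before loading the learner (again just a sketch; the file name and `learn_c` are from my setup, and I haven’t confirmed the GPU/CPU mismatch is what causes the freeze):

```python
from pathlib import Path

import torch
from fastai.text import *

# Force fastai to load and run the model on CPU
defaults.device = torch.device('cpu')

learn_c = load_learner(Path(__file__).parent, 'export.pkl')
```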