Variable.backward() is not stopping

I am facing two issues:

  1. I am not able to load the exported learner for TextClassificationInterpretation.from_learner(learn);
     it fails with an index out of range error (a rough sketch of the call is at the end of this post).
  2. In the function below, cl[0][class_id].backward() is not stopping: whenever I run it, it
     produces the output and then freezes on that line.

```python
def intrinsic_attention(self, text:str, class_id:int=None):
    """Calculate the intrinsic attention of the input w.r.t. an output class_id, or the
    classification given by the model if None. For reference, see the Sequential Jacobian
    section at https://www.cs.toronto.edu/~graves/preprint.pdf
    """
    self.model.train()
    eval_dropouts(self.model)
    self.model.zero_grad()
    self.model.reset()
    ids = self.data.one_item(text)[0]
    emb = self.model[0].module.encoder(ids).detach().requires_grad_(True)
    lstm_output = self.model[0].module(emb, from_embeddings=True)
    self.model.eval()
    cl = self.model[1](lstm_output + (torch.zeros_like(ids).byte(),))[0].softmax(dim=-1)
    if class_id is None: class_id = cl.argmax()
    cl[0][class_id].backward()  # <- this call never returns
    attn = emb.grad.squeeze().abs().sum(dim=-1)
    attn /= attn.max()
    tokens = self.data.single_ds.reconstruct(ids[0].cpu())
    return tokens, attn
```
I am running all of this on Windows Server.
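
For the first issue, this is roughly what I am calling (a minimal sketch, assuming fastai v1; the path and file name below are placeholders, not my exact ones):

```python
from fastai.text import *  # fastai v1: exposes load_learner and TextClassificationInterpretation

# Placeholder path and file name pointing at the exported classifier.
learn = load_learner('.', 'export.pkl')

# This is the call that raises the index out of range error for me.
interp = TextClassificationInterpretation.from_learner(learn)
```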
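For the second issue, to check whether backward() itself is the problem on this machine, I would expect a bare PyTorch snippet like this (no fastai involved) to return immediately; if it also freezes, the hang would be in the PyTorch install on Windows Server rather than in intrinsic_attention:

```python
import torch

# Tiny graph: y = sum(2 * x). Calling backward() on it should finish instantly.
x = torch.randn(4, 3, requires_grad=True)
y = (2 * x).sum()
y.backward()

print(x.grad)  # expected: a 4x3 tensor filled with 2.0
```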