"can't optimize non-leaf Tensor" error with learn.fit_one_cycle after running/plotting lr_find a second time

I'm getting a can't optimize a non-leaf Tensor exception … but only after running/plotting lr_find a second time after unfreezing. If I don’t run/plot lr_find after unfreezing, everything runs fine.

Here is the code …
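
(The original was a screenshot; the sketch below is a minimal reconstruction of the kind of sequence that triggers it, assuming the fastai v1 text API and the IMDB sample — the data, learner setup, and learning rates here are placeholders, not my exact notebook values.)

```python
from fastai.text import *  # fastai v1

# Placeholder data/learner setup (IMDB sample) -- not the original notebook's exact code
path = untar_data(URLs.IMDB_SAMPLE)
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)

learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(1, 1e-2)

learn.unfreeze()
learn.lr_find()                 # running/plotting lr_find again after unfreezing ...
learn.recorder.plot()
learn.fit_one_cycle(1, 1e-3)    # ... fails here with "can't optimize a non-leaf Tensor"
```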

And here’s the error …

I can reproduce the bug, it seems to be linked with WeightDropout. Will investigate more tomorrow.

PS: next time sharing a gist of your notebook would help us a lot more than a screenshot since I had to retype everything in your picture :wink:

Yikes, sorry about that. Will make it more copyable in the future.

That was a nasty one! I think it’s fixed by this commit but let me know if it doesn’t work.

Basically, the LSTM weights to which dropout is applied turn into a plain tensor during training (they’re not a parameter anymore), but during validation they turn back into a parameter (because no dropout is applied). The problem is that the LR Finder interrupts training and skips validation, so those weights stayed as a tensor and PyTorch refused to optimize them. I added a reset call at the end of the LR Finder to make sure they’re put back to a true parameter.
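
For anyone curious, here’s a tiny standalone illustration of that mechanism — not fastai’s actual WeightDropout code, just a sketch of the idea: the module’s weight gets replaced by the output of F.dropout, which is a non-leaf tensor, and PyTorch’s optimizer rejects it until a real parameter is put back.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch only: weight dropout keeps a raw copy of the weight and, during a
# training forward pass, replaces the module's weight with the *result* of
# F.dropout -- a plain, non-leaf tensor rather than an nn.Parameter.
lin = nn.Linear(4, 4)
raw_weight = nn.Parameter(lin.weight.data.clone())   # the stored "raw" copy
del lin._parameters['weight']

# Training mode: the weight attribute is now a non-leaf tensor
lin.weight = F.dropout(raw_weight, p=0.5, training=True)
print(isinstance(lin.weight, nn.Parameter), lin.weight.is_leaf)   # False False

# If training is interrupted here (as the LR Finder does) and no validation pass
# restores the real parameter, building an optimizer over this "weight" fails:
try:
    torch.optim.SGD([lin.weight], lr=0.1)
except ValueError as e:
    print(e)   # can't optimize a non-leaf Tensor

# Conceptually what restoring the parameter (e.g. the reset after lr_find) achieves:
lin.weight = nn.Parameter(raw_weight.data)
torch.optim.SGD([lin.weight], lr=0.1)   # works again
```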


So far so good!

And thanks for debugging this … I have a feeling it would have taken me quite a bit longer to 1) figure out what was happening and 2) determine what to do about it.

Anyways, let me know if you need help with any other text package related to-dos. I’ve been porting work from the deprecated code base to the new fastai framework, and outside of having a “backwards” pre-trained model and being able to specify where pre-trained models/vocabs are downloaded … everything else looks to be there.

It’s interesting that I didn’t run into this bug when I ran the LM code for both IMDB and my own custom dataset. I wonder why that is.