Fastai v2 text

I haven’t seen that error before.

Personally, I use `learn.save` when I want to be able to continue training from that point. I use `learn.export` when I just want to do inference on another machine.

That said, I don’t think using either one would have an effect on inference times. If you’re doing inference in a batch, you can use `learn.get_preds` instead of `learn.predict`. Otherwise, you’re probably just seeing the difference between predicting on a GPU vs a CPU.
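Neither fastai call runs without a trained `Learner`, so here is a plain-PyTorch sketch of why one batched forward pass (what `get_preds` does) beats calling the model once per item (what looping over `predict` amounts to) — the model and sizes are made up for illustration:

```python
import torch
import torch.nn as nn

# Stand-in for learn.model (assumption: any trained fastai model is a torch Module).
model = nn.Linear(100, 10).eval()
items = torch.randn(256, 100)  # 256 fake inputs

with torch.no_grad():
    # One batched forward pass over all items at once.
    batched = model(items)
    # One forward pass per item, then stacked back together.
    per_item = torch.stack([model(x) for x in items])

# Same predictions either way; the batched version just amortizes
# the per-call overhead (and fills the GPU, if you have one).
assert torch.allclose(batched, per_item, atol=1e-5)
```

The outputs are identical; the batched path is simply fewer, larger calls into the model.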

If you really want to test, you can force your GPU machine to run inference using only the CPU, and see how long it takes. I haven’t tried this on fastai2 yet, but this command might still work:
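Whatever the fastai-level command turns out to be, the effect is just moving the model and its inputs onto the CPU, which you can always do at the torch level. A sketch — the `nn.Linear` here is a stand-in for `learn.model`, which is assumed to exist on your GPU machine:

```python
import time
import torch
import torch.nn as nn

# Stand-in model; on your machine you would use learn.model instead
# (assumption: a fastai Learner named `learn` is already loaded).
model = nn.Linear(100, 10)

device = torch.device('cpu')     # force CPU even if CUDA is available
model = model.to(device).eval()

x = torch.randn(64, 100, device=device)  # a fake batch of inputs
start = time.perf_counter()
with torch.no_grad():
    preds = model(x)
elapsed = time.perf_counter() - start
print(f"CPU inference on {x.shape[0]} items took {elapsed:.4f}s")
```

Run the same batch with `device = torch.device('cuda')` on the GPU machine and compare the two timings.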