Is it possible to have better backward compatibility for our old models with each new Fastai release?

I’m not talking about major releases like 0.7 to 1.0, but minor ones like, say, 1.0.46 to 1.0.50. I usually don’t update fastai until I’m forced to by some bug that causes it to crash. Then, after upgrading, I find that the old models I saved with learner.export() no longer work properly and sometimes can’t even be loaded.

Retraining my models a few times isn’t a big deal… but having to retrain the same models for the N-th time after yet another fastai upgrade gets old.

I still appreciate what this library offers, but I’m starting to have second thoughts about using fastai in production when something like Keras is noticeably more resilient about loading saved models across version upgrades.

If I’m handling version upgrades incorrectly and all this retraining of models used in production is unnecessary, please let me know.
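For context, this is the workflow I mean (a minimal sketch, assuming fastai 1.0.x, an already-trained `learn` Learner, and example file names):

```python
from fastai.basic_train import load_learner  # in fastai 1.0.x; also re-exported by fastai.vision

# Export the trained Learner for inference; by default this writes
# export.pkl into learn.path.
learn.export('export.pkl')

# Later (possibly after a fastai upgrade) reload it for inference.
# This is the step that sometimes fails or misbehaves after an upgrade.
learn_inf = load_learner(learn.path, 'export.pkl')
pred = learn_inf.predict(item)  # `item` stands in for one input of the type the model was trained on
```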

It might depend on your use case, but there is the Open Neural Network Exchange (ONNX) format. You can convert PyTorch models (like the ones fastai wraps) to ONNX and then serve them independently of the fastai version.

I have never used it myself, but there is plenty of information out there on it…
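Roughly, the starting point would be something like this (untested sketch; it assumes a trained fastai Learner called `learn` whose model takes 224x224 RGB image batches, and an example output path):

```python
import torch

# learn.model is a plain PyTorch nn.Module, so the standard
# torch.onnx exporter applies.
model = learn.model.eval()

# Dummy input matching the model's expected shape (a single 224x224 RGB
# image here; adjust to your own input size and layout).
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",              # example output path
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)

# The exported file can then be served without fastai, e.g. with onnxruntime:
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx")
#   out = sess.run(None, {"input": dummy_input.numpy()})
```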


Good to know.

Also, I’m not sure if it’s too much to ask, but could we eventually have an LTS version of fastai for running in production, so we don’t have to deal with frequent and drastic functionality and API changes, especially once it sees more production use? There should still be a version with bleeding-edge features, but also one for those of us who need to run fastai in production without worrying about old features breaking every time we pick up a bug fix.

If time and resources are an issue, I’d be willing to volunteer to help maintain an LTS branch…