There are two ways of saving a PyTorch model:
torch.save(model, PATH)
which essentially pickles it, and
torch.save(model.state_dict(), PATH)
which stores only the model’s parameters (and buffers) as an ordered Python dictionary mapping names to tensors.
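To make the question concrete, here is a minimal sketch of both approaches (the toy model and file names are my own, just for illustration; the `weights_only` argument is only needed on newer PyTorch versions, where it defaults to True):

```python
import os
import tempfile

import torch
import torch.nn as nn

# A toy model standing in for any nn.Module
model = nn.Linear(10, 2)

with tempfile.TemporaryDirectory() as tmp:
    full_path = os.path.join(tmp, "model_full.pt")
    state_path = os.path.join(tmp, "model_state.pt")

    # Option 1: pickle the entire model object
    torch.save(model, full_path)
    # Loading it back requires the class definition to be importable,
    # and (on PyTorch >= 2.6) weights_only=False to allow unpickling
    loaded_full = torch.load(full_path, weights_only=False)

    # Option 2: save only the state_dict (parameters and buffers)
    torch.save(model.state_dict(), state_path)
    # Loading requires re-creating the architecture first
    loaded = nn.Linear(10, 2)
    loaded.load_state_dict(torch.load(state_path))
    loaded.eval()  # set to inference mode
```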
The PyTorch documentation mentions that:
When saving a model for inference, it is only necessary to save the trained model’s learned parameters. Saving the model’s state_dict with the
torch.save()
function will give you the most flexibility for restoring the model later, which is why it is the recommended method for saving models.
But why is that? What do they mean by flexibility? Could you provide some examples?