PyTorch checkpoints for using spot instances

Has anyone looked at using PyTorch checkpoints during training (e.g. saving `model.state_dict()` periodically) to allow training on interruptible spot instances, like the new Gradient Low-Cost instances on Paperspace or AWS spot instances directly?

It seems like regularly saving training state and model parameters could make training larger models much more economical. Would it be possible to have direct support for this feature in the library?
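For reference, here's a minimal sketch of the pattern I have in mind, assuming plain PyTorch (the checkpoint path, save frequency, and model are placeholders): on startup, resume from the last checkpoint if one exists; after each epoch, save both the model and optimizer state so a preempted run can pick up where it left off.

```python
import os
import torch
import torch.nn as nn

CKPT_PATH = "checkpoint.pt"  # placeholder; in practice, durable storage (e.g. mounted volume / S3)

model = nn.Linear(10, 1)  # stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Resume from a previous (possibly interrupted) run if a checkpoint exists.
start_epoch = 0
if os.path.exists(CKPT_PATH):
    ckpt = torch.load(CKPT_PATH)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    start_epoch = ckpt["epoch"] + 1

for epoch in range(start_epoch, 5):
    # One epoch of training (dummy data here).
    inputs, targets = torch.randn(32, 10), torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Save model *and* optimizer state so resumption is exact
    # (optimizer state matters for momentum/Adam-style optimizers).
    torch.save(
        {
            "epoch": epoch,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
        },
        CKPT_PATH,
    )
```

If the instance is killed mid-run, relaunching the same script resumes from the last completed epoch instead of starting over, so at most one checkpoint interval of work is lost.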