Hey Jason,
When you export a model (without the optimizer state), you essentially save all of its weights to disk. You can get a quick ballpark of the expected file size from the number of parameters, assuming float32 (4 bytes per parameter): for example, a 100-million-parameter model comes out to roughly 100M × 4 B ≈ 400 MB, so several hundred megabytes is typical at that scale.
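For instance, here's a quick sketch of that arithmetic, assuming a PyTorch model (the calculation itself is the same in any framework):

```python
import torch.nn as nn

def estimated_size_mb(model: nn.Module, bytes_per_param: int = 4) -> float:
    """Ballpark on-disk size of the raw weights (float32 = 4 bytes each).
    Real checkpoint files add a bit of serialization overhead, so treat
    this as a lower bound."""
    num_params = sum(p.numel() for p in model.parameters())
    return num_params * bytes_per_param / 1e6

# Stand-in for your real model: ~1M parameters -> roughly 4 MB on disk.
model = nn.Linear(1000, 1000)
print(f"~{estimated_size_mb(model):.1f} MB")
```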
Saying that it’s “prohibitive” for deployment may be true in your application, but that likely means you cannot use neural networks at all, or you have to use architectures specifically designed to be as lightweight as possible, which usually also costs some accuracy. Another option is to figure out what exactly prevents you from deploying this model and try to solve that problem instead.
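If the file size itself turns out to be the blocker, one common fix is to shrink the weights after training. A minimal sketch, again assuming PyTorch (post-training dynamic quantization stores the Linear layers' weights as int8, roughly a 4x size reduction, usually at some accuracy cost):

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for your trained float32 network.
model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 10))

# Dynamic quantization: weights of the listed layer types are stored as
# int8 and dequantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
torch.save(quantized.state_dict(), "model_int8.pt")
```

Whether the accuracy hit is acceptable is something you'd want to check on your validation set before shipping it.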