@henry.humadi graciously provided me with the following solution:
“Decouple your training from the prediction. Meaning you can train your model wherever you want, then pickle your model and save it on S3. Then consume it in your production environment.
This way you can set up the machine however you want without affecting your production environment.”
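A minimal sketch of that workflow, using only the standard-library `pickle` module: the dict standing in for a trained model, and the bucket/key names in the commented-out boto3 calls, are illustrative assumptions, not anything from the original post.

```python
import pickle

# Stand-in for a trained model; in practice this would be e.g. a
# scikit-learn estimator fitted in the training environment.
model = {"weights": [0.1, 0.2, 0.3], "bias": 0.5}

# Serialize with the highest available pickle protocol; protocol 5
# (Python 3.8+) handles large objects, such as a ~2 GB model,
# more efficiently than the default.
payload = pickle.dumps(model, protocol=pickle.HIGHEST_PROTOCOL)

# The upload/download step would go through S3, e.g. with boto3
# (bucket and key are hypothetical; not executed here):
#   s3 = boto3.client("s3")
#   s3.put_object(Bucket="my-bucket", Key="model.pkl", Body=payload)
#   payload = s3.get_object(Bucket="my-bucket", Key="model.pkl")["Body"].read()

# In production, deserialize once at process startup and reuse the
# object across requests, rather than reloading it per prediction.
restored = pickle.loads(payload)
print(restored == model)  # → True
```

For a model of this size, loading once at startup (or streaming the download with `boto3`'s `download_fileobj`) matters far more for latency than the serialization format itself.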
The model I am using is very large (about 2 GB), and I am concerned about loading and prediction speed.
Would welcome any suggestions.