Deployment platform: Amazon SageMaker

Hi all, I have just created a demo application showing how to deploy a fastai2 model to Amazon SageMaker for running in production. The demo lets you train your fastai2 model on a notebook instance, upload the model artefacts to S3, test the model deployment locally on the notebook instance, and then deploy to SageMaker hosting services.
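For context, the export-and-upload step looks roughly like the sketch below. This is not the demo's exact code: it assumes a trained Learner named learn, uses a placeholder S3 key prefix, and follows the usual SageMaker convention of packaging artefacts as a model.tar.gz archive.

import tarfile
import sagemaker

# Export the trained learner; writes export.pkl into learn.path
learn.export()

# Package the artefact as model.tar.gz, the format SageMaker expects
with tarfile.open('model.tar.gz', 'w:gz') as f:
    f.add(str(learn.path/'export.pkl'), arcname='export.pkl')

# Upload to the session's default S3 bucket; returns the s3:// URI
sess = sagemaker.Session()
model_data = sess.upload_data('model.tar.gz', key_prefix='fastai2-demo')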

To run the demo, follow the instructions below:

  1. Create a SageMaker Notebook instance as outlined in the following post.

  2. Clone the example application by opening a terminal in your notebook instance and running the command: cd ~/SageMaker && git clone https://github.com/mattmcclean/fastai2-sagemaker-deployment-demo.git

  3. Open and run through the notebook named fastai2_deploy_sagemaker_demo.ipynb (the deployment pattern it uses is sketched below).
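The deployment step follows the standard SageMaker Python SDK pattern, roughly as sketched here. The entry-point script name, framework version, and IAM role variable are illustrative assumptions, not the demo's exact values.

from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    model_data=model_data,      # s3:// URI of the model.tar.gz artefact
    role=role,                  # an IAM role with SageMaker permissions
    entry_point='serve.py',     # script providing model_fn, predict_fn, etc.
    framework_version='1.5.0',  # PyTorch version of the serving container
    py_version='py3',
)

# instance_type='local' runs the container on the notebook instance for
# testing; switch to e.g. 'ml.m5.large' for SageMaker hosting services
predictor = model.deploy(initial_instance_count=1, instance_type='local')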


Wonderful 🙂

Are you planning to do Lambda too, perchance?


I am trying to deploy a fastai-trained model (built using wwf (walk with fastai) and timm) to a SageMaker endpoint.
I have also included the required libraries in a requirements.txt.
But the model is not being loaded by load_learner.
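For reference, the SageMaker PyTorch serving container installs packages from a requirements.txt placed in the source directory next to the entry-point script. My layout looks roughly like this (package list abbreviated):

source/
├── serve.py            # inference script defining model_fn
└── requirements.txt    # fastai, wwf, timm, ...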

This is my model_fn:

import logging
from pathlib import Path

import torch
from fastai.learner import load_learner
from fastai.torch_core import defaults

logger = logging.getLogger(__name__)
EXPORT_MODEL_NAME = 'export.pkl'  # assumed value; defined elsewhere in my script

# Loads the model into memory from disk and returns it
def model_fn(model_dir):
    logger.info('model_fn')
    path_model = Path(model_dir)
    logger.debug(f'Loading model from path: {path_model/EXPORT_MODEL_NAME}')
    defaults.device = torch.device('cpu')  # force CPU inference
    print('Trying to Load Model::::')
    learn = load_learner(path_model/EXPORT_MODEL_NAME, cpu=True)
    print('MODEL-LOADED')
    logger.info('model loaded successfully')
    return learn

Since "MODEL-LOADED" is never printed, load_learner is failing to load the model.
But I have checked the installations with pip list, and all the required libraries are there.
So this is the problem…
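To surface the underlying error, one option is to wrap the load_learner call in a try/except and log the full traceback, so the actual exception shows up in the endpoint's CloudWatch logs. A minimal sketch, reusing the imports from the model_fn above:

import traceback

def model_fn(model_dir):
    path_model = Path(model_dir)
    defaults.device = torch.device('cpu')
    try:
        learn = load_learner(path_model/EXPORT_MODEL_NAME, cpu=True)
    except Exception:
        # the full traceback ends up in the endpoint's CloudWatch logs
        logger.error('load_learner failed:\n%s', traceback.format_exc())
        raise
    logger.info('model loaded successfully')
    return learn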

Thanks…