Deployment Platform: Amazon SageMaker

Here is a blog post, published just today, that walks through building a fastai model with Amazon SageMaker.

Thank you for your answer @amit_aec_it!
I followed the instructions exactly. However when trying to run the first cell, I already get an error. How is this possible?

ContextualVersionConflict: (requests 2.22.0 (/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages), Requirement.parse('requests<2.21,>=2.20.0'), {'sagemaker'})
1 Like

Hi Fabian. Sorry for the inconvenience. AWS just fixed the issue. Could you please delete your old CloudFormation stack? This will also delete your SageMaker notebook instance. After deletion, please run the steps as described in the blog. Let us know if that works.

Thank you for pointing me to the official instructions from AWS!
I was now able to get it to work out of the box :slight_smile:

One question though:
In their pets.py file they save the model via learn.save(model_path/f'{args.model_arch}') instead of exporting the model for inference.

When loading the model, they do it in a rather tedious way, like so:

# imports needed by this snippet
import os, glob
from fastai.vision import ImageDataBunch, create_cnn, models

empty_data = ImageDataBunch.load_empty(path)
# recover the architecture name (e.g. 'resnet34') from the saved .pth filename
arch_name = os.path.splitext(os.path.split(glob.glob(f'{model_dir}/resnet*.pth')[0])[1])[0]
print(f'Model architecture is: {arch_name}')
arch = getattr(models, arch_name)
learn = create_cnn(empty_data, arch, pretrained=False).load(path/f'{arch_name}')
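The nested `os.path` calls in that second line just strip the directory and the `.pth` extension from the checkpoint path to recover the architecture name. A small, self-contained sketch of that step (the path below is illustrative):

```python
import os

def arch_from_checkpoint(path):
    # os.path.split('/opt/ml/model/resnet34.pth') -> ('/opt/ml/model', 'resnet34.pth')
    # os.path.splitext('resnet34.pth')            -> ('resnet34', '.pth')
    return os.path.splitext(os.path.split(path)[1])[0]

print(arch_from_checkpoint('/opt/ml/model/resnet34.pth'))  # resnet34
```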

Why are they not using learn.export() and for inference the load_learner() function?
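For comparison, the export/load_learner pattern (available in fastai ≥ 1.0.48) would look roughly like this; the paths are illustrative, not from the AWS example:

```python
from fastai.vision import load_learner

# At training time, serialize everything needed for inference:
#   learn.export()  # writes export.pkl to learn.path

# At inference time, a single call restores the Learner:
learn = load_learner('/opt/ml/model')  # expects export.pkl in that folder
```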

I was able to identify that the default deployment container in SageMaker uses fastai v1.0.39. This version of fastai does not yet implement the load_learner() function.
Do you know how to set specific package versions through the SageMaker Python SDK?
Best regards

1 Like

You are right, Fabian. The load_learner function was introduced in fastai 1.0.48. Once AWS has pinned 1.0.48 or higher, you can use load_learner(). If you are asking how to use a specific version higher than 1.0.39, then you have to follow the BYOC (bring your own container) approach, with your desired fastai version installed in the container.

1 Like

@amit_aec_it If you need to go the BYOC route, I would like to recommend BentoML.

BentoML is an open-source Python framework for serving and operating machine learning models, making it easy to promote trained models into high-performance prediction services.

After you spec out your machine learning service, it takes one command to deploy to SageMaker.

You can check out the fastai example notebook here: https://colab.research.google.com/github/bentoml/gallery/blob/master/fast-ai/pet-classification/notebook.ipynb

And you can check out an example notebook for deploying to SageMaker here: https://github.com/bentoml/BentoML/tree/master/examples/deploy-with-sagemaker

Let me know what you think. I'd love to get your feedback.

Cheers

Bo

2 Likes

Thanks Bo. This is really very helpful and makes life a lot easier. I especially like the SageMaker deployment part. I will test it and get back to you.

I also found another way to update the fastai version:

Simply supply the PyTorchModel instance with a source_dir parameter.
Put a requirements.txt and your entry point file in the specified folder. In the requirements.txt, list your desired fastai version and you're good to go.
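A minimal sketch of what that folder might contain; the folder name, entry point filename, and pinned version are just examples (the version matches the one used later in this thread):

```python
import tempfile
from pathlib import Path

# Build the source_dir layout the SageMaker SDK will upload
src_dir = Path(tempfile.mkdtemp()) / 'source'
src_dir.mkdir()
(src_dir / 'pets.py').write_text('# entry point script\n')
(src_dir / 'requirements.txt').write_text('fastai==1.0.52\n')

print(sorted(p.name for p in src_dir.iterdir()))  # ['pets.py', 'requirements.txt']
```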

2 Likes

Hey @matt.mcclean, when opening the lesson 1 pets notebook on SageMaker, it says "kernel not found". The documentation says to choose "conda fastai", but that's not one of my options. Which one do I choose, or is there another issue?

I’ve been yak shaving for 2 days and this was the key information I needed. Thank you!

Glad my comment could help; I was stuck for some time as well. :slight_smile:

Hi @faib

Let’s say this is the code -

pets_estimator = PyTorch(entry_point='source/pets.py',
                         base_job_name='fastai-pets',
                         role=role,
                         framework_version='1.0.0',
                         train_instance_count=1,
                         train_instance_type='ml.p3.2xlarge') 

You are suggesting to add the source_dir parameter (pointing to “source”) and put the requirements.txt file into that folder, right?

Can you also explain how you would specify the version there, and how that works?

I was going through the PyTorch Sagemaker doc- https://sagemaker.readthedocs.io/en/stable/sagemaker.pytorch.html
and I found this explanation for source_dir-

source_dir ( str ) – Path (absolute or relative) to a directory with any other training source code dependencies aside from the entry point file (default: None). Structure within this directory is preserved when training on Amazon SageMaker.

You can instantiate the PyTorchModel class like this:

mail_model=PyTorchModel(model_data=model_artefact,
                        name=name,
                        role=role,
                        framework_version='1.1.0',
                        entry_point='serve.py',
                        predictor_cls=TextPredictor,
                        source_dir='my_src'
                       )

my_src is a folder containing serve.py and requirements.txt.

The requirements.txt file has the usual structure and contains e.g.:
fastai==1.0.52

Hope this helps.

Thanks @faib

I did exactly the same and it worked!

Regards

You need to use the kernel named "Python 3".

The notebook URL is returning a 404. Could you update it, please?

@tbass134 Yes, I will update it. The new link is https://github.com/bentoml/BentoML/tree/master/guides/deployment/deploy-with-sagemaker. I will update my reply as well. Thanks for pointing out the dead link!

We updated the example guide for SageMaker deployment. You can find the new guide at https://github.com/bentoml/BentoML/tree/master/guides/deployment/deploy-with-sagemaker

1 Like

Hi @faib, I'm just following along with this and am stuck on predictor_cls=TextPredictor.

What does your TextPredictor class look like?

My model accepts a single string of text, but I'm not sure how to pass this to the SageMaker model's .predict().
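For anyone landing here later: a custom predictor class is typically a small RealTimePredictor subclass that sets the content type so .predict() can be called with a raw string. A sketch against the v1 SageMaker Python SDK (the details are an assumption, not @faib's actual TextPredictor):

```python
from sagemaker.predictor import RealTimePredictor, json_deserializer

class TextPredictor(RealTimePredictor):
    """Send a raw text string to the endpoint and parse a JSON reply."""
    def __init__(self, endpoint_name, sagemaker_session=None):
        super().__init__(endpoint_name, sagemaker_session,
                         content_type='text/plain',
                         deserializer=json_deserializer)

# usage (endpoint name illustrative):
# predictor = TextPredictor('my-endpoint')
# predictor.predict('some input text')
```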