Hi Fast.ai fans,
Wanted to share some instructions and a sample showing how to deploy your fastai2 model on Azure's serverless product, Azure Functions.
This procedure starts after you have trained your model in fastai2 and exported it to an "export.pkl" file. Dependencies are kept to a minimum.
The steps for locally developing and testing the model-prediction serving as an Azure Function, and for publishing it to the Azure cloud, are described in this GitHub repo.
Here are the high-level development steps, with the key pieces of code from the repo for serving your model as a web service:
- Export your model in a binary form (like .pkl)
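In fastai2 this export is a one-liner; a minimal sketch, assuming learn is your trained Learner:

# serializes the Learner (model + transforms) to export.pkl under learn.path
learn.export('export.pkl')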
- Write predict.py. Here is a basic example (for the bear detector model, but the code is model-agnostic since it only expects an export.pkl file), taken from the GitHub repo.
from datetime import datetime
from fastai2.vision.all import *
from urllib.request import urlopen
import logging
import os
import sys
import fastai2

# load the exported learner once, at import time, so warm invocations reuse it
path = Path()
learn_inf = load_learner(path/'classify/export.pkl')

def predict_image_from_url(image_url):
    # fetch the image over HTTP and run it through the learner
    with urlopen(image_url) as testImage:
        img = PILImage.create(testImage)
        pred, pred_idx, probs = learn_inf.predict(img)

    response = {
        'created': datetime.utcnow().isoformat(),
        'prediction': pred,
        'confidence': probs[pred_idx.item()].item()
    }
    logging.info(f'returning {response}')
    return response

if __name__ == '__main__':
    print(predict_image_from_url(sys.argv[1]))
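Thanks to the __main__ block above, you can smoke-test the predictor from a shell before wiring it up: python predict.py <image-url> prints the response.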
- Invoke this from __init__.py, which receives the web request. Define a requirements.txt with all the dependencies needed to run the above code (a sample sketch follows the code below).
import logging
import json
import azure.functions as func
from .predict import predict_image_from_url

def main(req: func.HttpRequest) -> func.HttpResponse:
    # the 'img' query parameter carries the URL of the image to classify
    image_url = req.params.get('img')
    if not image_url:
        return func.HttpResponse(
            'Please pass an image URL in the "img" query parameter.',
            status_code=400)
    logging.info('Image URL received: ' + image_url)

    results = predict_image_from_url(image_url)

    headers = {
        "Content-Type": "application/json",
        "Access-Control-Allow-Origin": "*"  # allow browser clients from any origin
    }
    return func.HttpResponse(json.dumps(results), headers=headers)
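Two small supporting files round this out. First, the requirements.txt mentioned above; a minimal sketch (the repo may pin exact versions, e.g. a CPU-only torch build, so check it for the authoritative list):

azure-functions
fastai2

Second, Azure Functions routes the HTTP request to main via the function's function.json. The repo has the real one; a typical binding for this setup (anonymous GET, response returned through $return) looks like:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}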
- Set up a dev environment to test this within the Azure Functions runtime running locally. Details are in the GitHub repo.
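If you have the Azure Functions Core Tools installed, the local loop is roughly the following (a sketch; the repo documents the exact steps):

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
func start   # serves the function locally, at http://localhost:7071 by default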
- Create the required Azure resources to deploy the model into: a storage account, an App Service plan, and an Azure Function App.
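With the Azure CLI, that provisioning is roughly as follows (all names and the region are placeholders; this sketch uses a consumption plan, which az functionapp create sets up implicitly, so run az appservice plan create first only if you want a dedicated plan):

az group create --name fastai-demo-rg --location westus2
az storage account create --name fastaidemostorage --resource-group fastai-demo-rg --sku Standard_LRS
az functionapp create --name fastai-demo-app --resource-group fastai-demo-rg --storage-account fastaidemostorage --consumption-plan-location westus2 --runtime python --os-type linux --functions-version 3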
- Publish the code to the Azure Function App in the cloud and fetch the URL for the model predictor, which is published as a web endpoint.
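Publishing is a single Core Tools command (the app name is a placeholder; --build remote asks Azure to build the heavy Python dependencies server side, which helps with torch):

func azure functionapp publish fastai-demo-app --build remote

When it finishes, the command prints the invoke URL for the deployed function.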
Now you can use any HTTP client or browser to get predictions by hitting that URL and passing in the input image URL (for this example; you can adapt the input handling to your model's needs).
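For example, with Python's requests library (the host name and route here are placeholders; use the URL printed by the publish step):

import requests

# hypothetical endpoint; substitute your function app's URL and route
url = 'https://fastai-demo-app.azurewebsites.net/api/classify'
resp = requests.get(url, params={'img': 'https://example.com/some-bear.jpg'})
print(resp.json())  # e.g. {'created': '...', 'prediction': 'grizzly', 'confidence': 0.98}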
Please try this out and send me your feedback.