Deployment Platform: Azure Functions

Hi Fast.ai fans,

Wanted to share some instructions and a sample showing how to deploy your fastai2 model on Azure’s serverless product, Azure Functions.

This procedure starts after you have trained your model in fastai and exported it to an “export.pkl” file. Dependencies are kept to a minimum.

The steps for locally developing and testing the model-prediction service as Azure Functions, and for publishing it to the Azure cloud, are described in this GitHub repo.

Here are the high-level development steps and the key pieces of code (from the repo) for serving your models as web services:

  1. Export your model in a binary form (like .pkl)
  2. Write predict.py. Here is a basic example (for the bear-detector model, but the code is model-agnostic since it just expects an export.pkl file). This is from the GitHub repo.
from datetime import datetime
from fastai2.vision.all import *
from urllib.request import urlopen

import logging
import os
import sys
import fastai2

path = Path()
learn_inf = load_learner(path/'classify/export.pkl')

def predict_image_from_url(image_url):
    with urlopen(image_url) as testImage:
        img = PILImage.create(testImage)
        pred,pred_idx,probs = learn_inf.predict(img)

        response = {
            'created': datetime.utcnow().isoformat(),
            'prediction': str(pred),  # ensure the label is JSON-serializable
            'confidence': probs[pred_idx.item()].item()
        }
        logging.info(f'returning {response}')
        return response

if __name__ == '__main__':
    print(predict_image_from_url(sys.argv[1]))
  3. Invoke this from __init__.py, which receives the web request. Define a requirements.txt with all the dependencies needed to run the above code.
import logging
import json
import azure.functions as func

from .predict import predict_image_from_url

def main(req: func.HttpRequest) -> func.HttpResponse:
    image_url = req.params.get('img')
    logging.info('Image URL received: ' + image_url)

    results = predict_image_from_url(image_url)

    headers = {
        "Content-type": "application/json",
        "Access-Control-Allow-Origin": "*"
    }

    return func.HttpResponse(json.dumps(results), headers = headers)
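For reference, a minimal requirements.txt might look like the following. This is only a sketch — the exact package set and version pins are assumptions; match them to your training environment (and check the repo for the tested versions):

```
azure-functions
fastai2
torch
torchvision
Pillow
```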
  4. Set up a dev environment to test this within the Azure Functions runtime running locally. Details are in the GitHub repo.
  5. Create the required Azure resources to deploy the model into: a storage account, an App Service Plan, and an Azure Function App.
  6. Publish the code to the Azure Function App in the cloud and fetch the URL for the model predictor, which is published as a web endpoint.
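Steps 5 and 6 can be sketched with the Azure CLI. All resource names below are placeholders, and the region, SKU, and runtime-version choices are assumptions — adjust them to your subscription and Python version:

```shell
# create a resource group, storage account, and a consumption-plan Function App
az group create --name fastai-rg --location eastus
az storage account create --name fastaistorage123 --resource-group fastai-rg --sku Standard_LRS
az functionapp create --name fastai-classifier --resource-group fastai-rg \
    --storage-account fastaistorage123 --consumption-plan-location eastus \
    --runtime python --functions-version 3 --os-type linux

# publish the local project to the new Function App (builds dependencies remotely)
func azure functionapp publish fastai-classifier --build remote
```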

Now you can use any HTTP client or browser to get predictions by hitting the URL and passing in the input image URL (in this example; you can modify it to match the input needs of your model).
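To call the deployed endpoint from code rather than a browser, here is a small client sketch. The `img` query-parameter name matches the __init__.py above; the base URL and helper names are mine:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def build_predict_url(base_url, image_url):
    # append the image URL as the 'img' query parameter expected by __init__.py
    return f"{base_url}?{urlencode({'img': image_url})}"

def get_prediction(base_url, image_url):
    # hit the function endpoint and decode the JSON response
    with urlopen(build_predict_url(base_url, image_url)) as resp:
        return json.loads(resp.read().decode('utf-8'))
```

For example, `get_prediction("https://<yourapp>.azurewebsites.net/api/classify", "https://example.com/bear.jpg")` would return the dict built in predict.py.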

Please try this out and send me your feedback.


I have another sample showing how to export your models to ONNX and deploy the model serving to Azure Functions as a web service. Please check it out.

Once you export to ONNX, the prediction system only needs the ONNX Runtime and a few dependencies. You don’t need PyTorch or the fastai libraries (with all their dependencies) on the target system, which results in smaller packages and the ability to run on cheaper instances in Azure serverless. ONNX Runtime serving is also usually much faster (latency-wise), since it is lightweight and heavily optimized; the runtime itself is written in C++ with interfaces in multiple languages, including Python, Java, C#, and C++.

Here is the repo: https://github.com/gopitk/functions-deploy-onnx
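One thing to note: without fastai you also lose `predict()`’s post-processing, so the ONNX serving code has to decode the raw model outputs itself. A minimal numpy sketch, assuming a standard classification head that outputs logits (the function and label names here are mine, not from the repo):

```python
import numpy as np

def postprocess(logits, labels):
    # numerically stable softmax over the class logits
    exps = np.exp(logits - logits.max())
    probs = exps / exps.sum()
    idx = int(probs.argmax())
    # mirror the JSON shape returned by predict.py above
    return {'prediction': labels[idx], 'confidence': float(probs[idx])}
```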


I am trying to do the above with an MS Azure account, in Azure Cloud Shell.

When I get to the step

pip install --no-cache-dir -r requirements.txt  

it downloads several packages, then I get the following error:

Downloading https://files.pythonhosted.org/packages/bc/8b/a7de205514540e99d3e00b35b01710345b1306a2a04d75ef6697f074f499/onnx-1.8.0.tar.gz (5.2MB)
100% |████████████████████████████████| 5.2MB 74.5MB/s
Installing build dependencies … done
Complete output from command python setup.py egg_info:
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-install-sp9x_xm5/onnx/setup.py", line 75, in
assert CMAKE, 'Could not find "cmake" executable!'
AssertionError: Could not find "cmake" executable!


Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-sp9x_xm5/onnx/

Can anyone help? What am I doing wrong?