I’m working on deploying every model
The Render walkthrough is a great way to deploy almost every type of vision model I've found, as they all rely on the same bit of boilerplate code. Let's take a look at how the analyze function differs between single-label and multi-label classification.
First, single-label classification, based on the example from the course:
@app.route('/analyze', methods=["POST"])
async def analyze(request):
    data = await request.form()
    img_bytes = await (data['file'].read())
    pred = learn.predict(img_bytes)[0]
    return JSONResponse({'result': str(pred)})
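For context, here is a minimal sketch of the boilerplate this route sits inside; the file name 'export.pkl' and the port are placeholders, and the actual Render starter code differs slightly, so treat this as an assumption rather than the exact template:
from fastai.vision.all import load_learner
from starlette.applications import Starlette
from starlette.responses import JSONResponse
import uvicorn

app = Starlette()
learn = load_learner('export.pkl')  # placeholder path to the exported Learner

if __name__ == '__main__':
    uvicorn.run(app, host='0.0.0.0', port=5000)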
Now, how do we modify this for multi-label? The only thing that needs to change is what happens with our predictions, as our multi-label predictions (from the planets example) now contain underscores. All we have to do is something like this:
    # Strings are immutable, so rebuild the list with the cleaned-up labels
    pred = [p.replace('_', ' ').title() for p in pred]
    if not pred:
        pred = 'Could not recognize any classes, perhaps try another photo?'
    return JSONResponse({
        'result': pred
    })
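As a quick illustration (the labels here are hypothetical, loosely based on the planets dataset), the clean-up turns the raw prediction into something presentable:
# Hypothetical raw multi-label prediction with underscores in the labels
pred = ['partly_cloudy', 'slash_burn', 'primary']
pred = [p.replace('_', ' ').title() for p in pred]
# pred is now ['Partly Cloudy', 'Slash Burn', 'Primary']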
All we changed was making the output more readable for the end product. This uses Starlette, based on the Render example: Documentation tutorial code
As you can see, fastai's predict function is what makes this possible.
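For reference, here is a rough sketch of what that call returns and why we index into it with [0]; the class names shown are made up:
# learn.predict returns (roughly) a three-item tuple:
#   (decoded prediction, prediction as a tensor, probabilities)
dec, idx, probs = learn.predict(img_bytes)
# single-label: dec is one class name, e.g. 'teddy'
# multi-label:  dec is a list of class names, e.g. ['clear', 'primary']
# so taking [0] keeps just the decoded prediction for the JSON response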