Segmentation learn.predict using Starlette & Render; show image mask as result rather than str(prediction)

Has anyone tried to create a web app for a segmentation problem?

I have successfully used: https://github.com/render-examples/fastai-v3 to do my own image classification problems.

But for segmentation, rather than using (per the above):

prediction = learn.predict(img)
return JSONResponse({'result': str(prediction)})

That works and gives me the full prediction tuple (the ImageSegment, the mask tensor, and the raw probabilities), e.g.:

Result = (ImageSegment (1, 256, 256), tensor([[[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]]]), tensor([[[8.5772e-02, 8.4360e-02, 1.0073e-01, ..., 9.9956e-01, 9.9786e-01, 9.9232e-01], [1.1104e-01, 9.4421e-02, 1.2525e-01, ..., 9.9988e-01, 9.9905e-01, 9.9127e-01], [1.5099e-01, 1.4895e-01, 1.8742e-01, ..., 9.9993e-01, 9.9962e-01, 9.9481e-01], ..., [8.7420e-01, 9.2307e-01, 8.3781e-01, ..., 9.8641e-01, 9.8990e-01, 9.6547e-01], [9.1475e-01, 9.4999e-01, 9.1047e-01, ..., 9.8308e-01, 9.8245e-01, 9.4360e-01], [8.9638e-01, 9.1444e-01, 8.8966e-01, ..., 9.6006e-01, 9.3871e-01, 8.6142e-01]], [[9.1423e-01, 9.1564e-01, 8.9927e-01, ..., 4.4379e-04, 2.1372e-03, 7.6819e-03], [8.8896e-01, 9.0558e-01, 8.7475e-01, ..., 1.1997e-04, 9.5195e-04, 8.7291e-03], [8.4901e-01, 8.5105e-01, 8.1258e-01, ..., 6.8235e-05, 3.7505e-04, 5.1882e-03], ..., [1.2580e-01, 7.6933e-02, 1.6219e-01, ..., 1.3586e-02, 1.0098e-02, 3.4532e-02], [8.5250e-02, 5.0013e-02, 8.9535e-02, ..., 1.6916e-02, 1.7554e-02, 5.6399e-02], [1.0362e-01, 8.5556e-02, 1.1034e-01, ..., 3.9943e-02, 6.1291e-02, 1.3858e-01]]]))

I would rather show the segmented image. But the segmented image is drawn as an overlay when you run:

img.show(y=learn.predict(img)[0])

I want to show just the segments on their own (as a second image displayed underneath the uploaded one). The uploaded image already displays per the above GitHub repository. So I wrote the following, which works just fine and shows what I want in my Jupyter notebook:

prediction = learn.predict(img)
think_np = np.array(prediction[1])     # the predicted mask tensor
think_np.shape = (256, 256)            # drop the channel dimension
think_np = think_np.astype(int)
think_np[think_np > 0] = 255           # binarize: segment pixels -> white
think_im = PilImage.fromarray(think_np.astype('uint8'), mode='L')

Granted, to say my HTML, CSS, and JSON are weak is a big understatement. But I am trying to get "think_im" to show as 'result' after hitting the Analyze button, instead of showing str(prediction).

So, I replaced:

return JSONResponse({'result': str(prediction)})

With:

return JSONResponse({'result': think_im})

And then I didn't change anything in client.js (let alone the HTML and CSS yet). But even before looking into those, just in server.py I get the following error:

TypeError: Object of type 'Image' is not JSON serializable

I have been trying to research this, but I'm not getting off the starting blocks. I'd appreciate any pointers.


If you want to return the image mask, I don't think you can include it in a JSONResponse (since the image can't be well represented in JSON), but you can use the more generic Response class. There might be a simpler way, but I only know how to return it as a byte array, like so:

from io import BytesIO
from starlette.responses import Response

...your code...

img_bytes = BytesIO()
think_im.save(img_bytes, format='png')   # write the PIL mask into an in-memory PNG
return Response(img_bytes.getvalue(), media_type='image/png')
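
If it helps for testing: since this returns raw PNG bytes rather than JSON, you can sanity-check the endpoint outside the browser. A minimal sketch using the requests library (the localhost URL, /analyze route, and 'file' field name are assumptions based on the fastai-v3 starter, not anything from your setup):

import requests

# hypothetical smoke test: POST an image, save the returned PNG mask
resp = requests.post('http://localhost:5000/analyze',
                     files={'file': open('cells.png', 'rb')})
with open('mask.png', 'wb') as f:
    f.write(resp.content)   # resp.content holds the raw PNG bytes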

Hope that helps!


I served an image by saving it as a byte string and passing it to an HTML template:

import base64
from io import BytesIO
import PIL.Image

img_io = BytesIO()
PIL.Image.fromarray(im_data).save(img_io, 'PNG')   # im_data: the mask as a numpy array
img_io.seek(0)

# base64-encode the PNG bytes and build a data URI for the template
img_str = base64.b64encode(img_io.getvalue()).decode()
img_str = "data:image/png;base64," + img_str

return templates.TemplateResponse('output.html', {'request' : request, 'b64val' : img_str})

In your HTML you would have

        <input id="img_b64" type="hidden" name="img" value="{{b64val}}">
        <div>
                <img id="myImage" class="image-display" alt="image">
        </div>
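
For completeness, the snippet above assumes a templates object is already wired up. A minimal sketch of that setup using Starlette's Jinja2Templates (the directory and route names here are placeholders, not the poster's actual files):

from starlette.applications import Starlette
from starlette.templating import Jinja2Templates

templates = Jinja2Templates(directory='templates')   # where output.html lives
app = Starlette()

@app.route('/analyze', methods=['POST'])
async def analyze(request):
    img_str = "data:image/png;base64,..."   # built from the prediction as shown above
    return templates.TemplateResponse('output.html', {'request': request, 'b64val': img_str})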

Had exactly this question about showing a segmentation result, so thanks @drewbo @KarlH for the guidance! I combined your answers to return a base64-encoded img_str that I could put directly into <img src="...">:

In server.py:

outputs = learn.predict(img)
im = image2np(outputs[2].sigmoid())   # raw activations -> numpy array in [0, 1]
resp_bytes = BytesIO()
PIL.Image.fromarray((im*255).astype('uint8')).save(resp_bytes, format='png')
img_str = base64.b64encode(resp_bytes.getvalue()).decode()
img_str = "data:image/png;base64," + img_str
return Response(img_str)

In client.js:

el("result-label").innerHTML = `<img src="${response}">`;

Working pretty well for me here: https://segment-tiles.onrender.com/
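
For anyone piecing this together, the whole handler might look roughly like the sketch below. The /analyze route, form-field name, and upload handling mirror the render-examples/fastai-v3 starter, so treat the details as assumptions rather than the exact code behind the demo link above:

import base64
from io import BytesIO

import PIL.Image
from starlette.responses import Response
# open_image, image2np and learn come from the fastai v1 setup in server.py

@app.route('/analyze', methods=['POST'])
async def analyze(request):
    data = await request.form()
    img_bytes = await (data['file'].read())
    img = open_image(BytesIO(img_bytes))      # fastai v1 Image from the upload
    outputs = learn.predict(img)
    im = image2np(outputs[2].sigmoid())       # activations -> numpy in [0, 1]
    resp_bytes = BytesIO()
    # assumes a single-channel mask; scale to 0-255 and encode as PNG
    PIL.Image.fromarray((im * 255).astype('uint8')).save(resp_bytes, format='png')
    img_str = 'data:image/png;base64,' + base64.b64encode(resp_bytes.getvalue()).decode()
    return Response(img_str)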


Glad to have helped.

One thing I'll note is that I originally displayed the image the same way you did, but I had issues with larger images that created long data URIs. Some browsers (Chrome) truncate long data URIs automatically. I found that downloading the displayed image (a functionality my use case needed) via a standard HTML element failed when a long data URI was truncated by the browser. The configuration in my post above got around the problem. Not sure if that's relevant to your use case.


@daveluo Thank you for the solution that you have posted. Do you think you could share your related files for deploying the segmentation model on Render?

Thank you

@jmstadt Hey, were you able to get the segmentation output working on Render?

Harshal, I admit I have not yet. I plan on trying the above, but I had to work on some other things, so sorry that I haven't progressed to having input. I got it to work on my local host with a Flask app, but have not been able to on a public host.

@jmstadt Can you share the files required to run the Flask app?

I can, Harshal; it is in a notebook. Maybe if you get it running on Render you can let me know. :)

Also, I am not a very good coder, so I apologize for the lack of elegance. But it does work.

from fastai.vision import *
from fastai.metrics import error_rate
from flask import Flask, request, url_for, flash, redirect, send_from_directory
from werkzeug.utils import secure_filename

import numpy as np
import os
from PIL import Image as PILImage

# The custom classes the learner was exported with must be defined
# before load_learner, or unpickling the .pkl fails
class SegLabelListCustom(SegmentationLabelList):
    def open(self, fn): return open_mask(fn, div=True)

class SegItemListCustom(SegmentationItemList):
    _label_cls = SegLabelListCustom

path = ''
learn = load_learner(path, 'cell_export.pkl')

UPLOAD_FOLDER = './upload_predictions'
ALLOWED_EXTENSIONS = set(['jpg', 'png'])

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
os.makedirs(UPLOAD_FOLDER, exist_ok=True)   # make sure the upload folder exists

def allowed_file(filename):
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/', methods=['GET', 'POST'])
def upload_file():

    if request.method == 'POST':
        # check if the post request has the file part
        if 'file' not in request.files:
            flash('No file part')
            return redirect(request.url)

        file = request.files['file']

        # if the user does not select a file, the browser
        # submits an empty part without a filename
        if file.filename == '':
            flash('No selected file')
            return redirect(request.url)

        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            print(filename)

            # resize the upload to the size the model expects and save it
            img = PILImage.open(file)
            img = img.resize((256, 256), PILImage.ANTIALIAS)
            img = img.convert('RGB')
            img.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))

            extension = filename.rsplit('.', 1)[1].lower()
            print(extension)

            img.save(os.path.join(app.config['UPLOAD_FOLDER'], ('img_tfms.' + extension)))

            # reload the resized image as a fastai Image for prediction
            image = open_image(os.path.join(app.config['UPLOAD_FOLDER'], ('img_tfms.' + extension)))
            print(image.shape)
            image_url = url_for('uploaded_file', filename=filename)

            # predict, binarize the mask, and save it as a greyscale PNG
            think = learn.predict(image)
            think_np = np.array(think[1])
            think_np.shape = (256, 256)
            think_np = think_np.astype(int)
            think_np[think_np > 0] = 255
            new_file = filename.rsplit('.', 1)[0]
            think_im = PILImage.fromarray(think_np.astype('uint8'), mode='L')
            think_im.save(os.path.join(app.config['UPLOAD_FOLDER'], (new_file + '_pred.png')))
            think_im_url = url_for('uploaded_file', filename=(new_file + '_pred.png'))

            return '''<h1>The cell image is:</h1>
            <img src="{}" height="85" width="200"/>
            <h1>The cell nuclei segmentation is:</h1>
            <img src="{}" height="85" width="200"/>'''.format(image_url, think_im_url)

    # on GET, clear out previous uploads before showing the form
    list(map(os.unlink, (os.path.join(app.config['UPLOAD_FOLDER'], f) for f in os.listdir(app.config['UPLOAD_FOLDER']))))

    return '''
    <!doctype html>
    <title>Upload new File</title>
    <h1>Upload an image of Cells and Nuclei</h1>
    <form method=post enctype=multipart/form-data>
      <input type=file name=file>
      <input type=submit value=Upload>
    </form>
    '''

@app.route('/uploads/<filename>')
def uploaded_file(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)

# When testing in Chrome, hit CTRL-SHIFT-I, then right-click the reload
# button and choose "Empty Cache and Hard Reload"
if __name__ == '__main__':
    app.run(port=5000, debug=False)
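
If you try to run this sketch: save it next to cell_export.pkl (e.g. as app.py, name assumed), run python app.py, and open http://localhost:5000. One gotcha worth repeating: load_learner unpickles the exported learner, so the custom SegLabelListCustom / SegItemListCustom classes must be defined before the load_learner call (which is why they sit at the top of the script); without them you typically get an unpickling error along the lines of "Can't get attribute 'SegLabelListCustom'".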


@jmstadt Thank you for your swift response. Instead of sharing your code here, could you maybe upload the required files to your GitHub and share the link with me? The code posted here is not exactly readable.

Thank you.

Sorry for reviving this topic, but I'm using the same example to deploy my segmentation model and it's not working. There is a problem with load_learner; it always shows the same error at:

learn = load_learner(path, export_file_name)
state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)

I changed everything as described above, but I'm still having this problem. If someone can help me, it would be great.