Using a trained model in a REST API with Flask

Hi everyone,

First, I want to thank Jeremy Howard for creating fastai and sharing it with so much passion.

In order to understand the basics of fastai and Python more clearly, I decided to create a REST API that communicates with a trained model to make a prediction on a single image.

Things are going pretty well and my API is almost working… but I still have some questions.

When I send a request to my server like this:

curl -X POST -F image=@dog.jpg 'http://localhost:5000/predict'

I’m sending an image directly to my API.

But the only way I found to transform an image is like this:

trn_tfms, val_tfms = tfms_from_model(arch, sz)
image = val_tfms(open_image("PATH"))

So, how can I call val_tfms() with the image directly, and not its path?
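To illustrate the question: val_tfms is just a callable applied to a pixel array, so presumably any route that builds that array in memory could feed it. A minimal sketch, where val_tfms_stub is a hypothetical stand-in (not the real fastai transform):

```python
import numpy as np

# Hypothetical stand-in for val_tfms: any callable taking a float32 H x W x 3 array
def val_tfms_stub(arr):
    return arr.transpose(2, 0, 1)  # HWC -> CHW, as fastai transforms do

# Instead of open_image("PATH"), build the array from in-memory bytes
raw = bytes(range(12))  # pretend these are decoded pixel bytes of a 2 x 2 RGB image
img = np.frombuffer(raw, np.uint8).reshape(2, 2, 3).astype(np.float32) / 255

transformed = val_tfms_stub(img)
print(transformed.shape)  # (3, 2, 2)
```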

To continue pushing forward, I’m testing the prediction with a hard-coded test image. But when I call:

preds = learn.predict_array(image[None])

I’m getting this error:

ValueError: Expected more than 1 value per channel when training, got input size [1, 1024]

But if I call learn.predict() beforehand, it works. Why?

Thank you


Hi,
there was an issue with the predict_array method (the model wasn’t set to evaluation mode). Run git pull origin master to update your local repository and the fastai library. It should work now.
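The underlying failure can be reproduced in plain PyTorch: a BatchNorm layer cannot compute batch statistics from a single sample in training mode, which is why switching to evaluation mode fixes it. A minimal sketch:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(1024)
x = torch.randn(1, 1024)  # a single sample, like image[None]

bn.train()  # training mode: batch statistics needed, impossible with one sample
try:
    bn(x)
except ValueError as e:
    print(e)  # "Expected more than 1 value per channel when training, ..."

bn.eval()  # evaluation mode: uses running statistics instead
out = bn(x)
print(out.shape)  # torch.Size([1, 1024])
```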


Yes, it worked, thank you!

Hi @polegar. I want to do the same thing as you. To run predictions in a RESTful API, did you install the whole fastai environment on your server, or just PyTorch?

Hi,

Maybe there is a way with PyTorch alone, but to run my project I installed the whole fastai environment.

By the way, I just created a new repo on GitHub (https://github.com/Polegar22/fastai_api/tree/master). Any feedback is appreciated!

Hi @polegar. I took a look at your repo and I understand your approach, but if we need to load the whole data, the trained weights, and the pretrained weights:
data = ImageClassifierData.from_paths("data/uglybeauty/", bs=16, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=False)
learn.load('beautyDetector_all')
it will make a huge, slow app in production. You tested it on localhost, but in production there is a risk of running out of memory.

Hi,

I’m not sure it will have a big impact in production, because the method load_model() is only called once, when you start the server. After that, you only go through the predict() method. And if you look into the directories in data/uglybeauty, there are no images, only the directory structure.
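The load-once pattern I mean can be sketched like this (the model here is a stub; in the real project load_model would build the ConvLearner and call learn.load):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
model = None

def load_model():
    """Called once at server start. Stub standing in for
    ConvLearner.pretrained(...) + learn.load(...)."""
    global model
    model = lambda img_bytes: {"class": "beauty", "confidence": 0.9}  # hypothetical

@app.route("/predict", methods=["POST"])
def predict():
    # Only this cheap call runs per request; the heavy load happened at startup
    return jsonify(model(request.data))

load_model()
# app.run() would start the server; omitted so the sketch stays importable
```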

But I’m wondering if there is another way to instantiate the learn object in order to load my model. The only way I found was to call:

data = ImageClassifierData.from_paths("data/uglybeauty/", bs=16, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=False)

Does anyone have an idea?

1 Like

So you are passing an empty directory to the ImageClassifierData.from_paths method? And another question: to install the fastai environment in your project, you said to “copy the fastai library in the server folder”. Do you mean the whole repo, or just the fastai folder inside the repo?

No, it’s not empty: it contains my saved models and other stuff in the tmp directory. For more details, look directly at what’s in the directory: https://github.com/Polegar22/fastai_api/tree/master/server/data/uglybeauty.

I mean this folder: https://github.com/fastai/fastai/tree/master/fastai. But for this to work, you have to be sure that the fastai library is set up correctly in your environment, by following the instructions here: https://github.com/fastai/fastai.

Edit: I just saw this sequence (https://youtu.be/blyXCk4sgEg?t=108) where Jeremy explains what to do with the fastai library for your own projects. It’s basically what I wrote.

What format is the image in, when you say ‘the image directly’? open_image simply returns a numpy array containing the pixel values. So any approach that grabs the pixel values will work fine.

Hi,

I’m sending the image this way:

img = cv2.imread(IMAGE_PATH)
img_encoded = cv2.imencode('.jpg', img)[1]  # imencode returns (retval, buffer)
response = requests.post(FASTAI_REST_API_URL, data=img_encoded.tobytes(), headers=headers)

And I receive it this way:

nparr = np.frombuffer(flask.request.data, np.uint8)
flags = cv2.IMREAD_UNCHANGED + cv2.IMREAD_ANYDEPTH + cv2.IMREAD_ANYCOLOR
image = cv2.cvtColor(cv2.imdecode(nparr, flags).astype(np.float32) / 255, cv2.COLOR_BGR2RGB)
image = val_tfms(image)

It’s working, but I don’t know if it’s the right way.
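The byte round trip at the core of this can be checked with numpy alone (a toy 2×2 image standing in for the JPEG payload):

```python
import numpy as np

# A fake 2 x 2 RGB image as uint8 pixels
img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

# "Send": serialize raw pixel bytes (cv2.imencode would produce JPEG instead)
payload = img.tobytes()

# "Receive": rebuild the array from the request body bytes
received = np.frombuffer(payload, np.uint8).reshape(2, 2, 3)

# Normalize to float32 in [0, 1], the same step done before val_tfms
normalized = received.astype(np.float32) / 255
print(np.array_equal(received, img))  # True
```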

Looks fine to me. That’s basically the same code we have inside open_image.


I am attempting to use this, but with a plain website using canvas on the other side in JavaScript. I can’t get the send from JavaScript to match the simple format of your example Python file. Do you know how I would do something similar with AJAX, perchance?

Can you shed some light on how you actually communicated with the fastai model through a REST API? Did you use Flask?

Hello,

This part of the forum is somewhat outdated since the release of v3 of the course, and this project is not based on the latest version of fastai.

You should check out this tutorial: https://course.fast.ai/deployment_google_app_engine.html.