First, I would like to thank Jeremy Howard for creating fastai and sharing it with so much passion.
In order to understand the basics of fastai and Python more clearly, I decided to create a REST API that communicates with a trained model to make a prediction on a single image.
Things are going pretty well and my API is almost working, but I still have some questions:
When I send a request to my server like this:
curl -X POST -F firstname.lastname@example.org 'http://localhost:5000/predict'
I’m sending an image directly to my API.
But the only way I found to transform an image is like this:
trn_tfms, val_tfms = tfms_from_model(arch, sz)
image = val_tfms(open_image("PATH"))
So, how can I call val_tfms() with the image data directly instead of its path?
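My current idea is to rebuild the array that open_image() would have produced, but from the request bytes. As far as I can tell, fastai 0.7's open_image() returns an RGB float32 array scaled to [0, 1], so something like this (using PIL instead of OpenCV, which is an assumption about equivalence on my side) might work as input to val_tfms():

```python
import io

import numpy as np
from PIL import Image

def open_image_from_bytes(raw_bytes):
    # rebuild what open_image("PATH") produces, but from in-memory bytes:
    # an RGB float32 numpy array with values scaled to [0, 1]
    im = Image.open(io.BytesIO(raw_bytes)).convert('RGB')
    return np.asarray(im, dtype=np.float32) / 255

# then, in the view: image = val_tfms(open_image_from_bytes(raw_bytes))
```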
To keep pushing forward, I’m testing the prediction with a hard-coded test image. But when I call:
preds = learn.predict_array(image[None])
I’m getting this error:
ValueError: Expected more than 1 value per channel when training, got input size [1, 1024]
But if I call learn.predict() beforehand, predict_array() works. Why is that?
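For reference, the error itself seems to come from a BatchNorm layer being asked to compute batch statistics from a single sample while the model is in training mode; plain PyTorch reproduces the exact message:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(1024)
x = torch.randn(1, 1024)  # a batch of one, like image[None]

bn.train()  # training mode: statistics are computed from the batch itself
try:
    bn(x)
except ValueError as e:
    print(e)  # Expected more than 1 value per channel when training, ...

bn.eval()  # eval mode: stored running statistics are used instead
out = bn(x)  # works with a single sample
```

If that is the cause, calling learn.model.eval() before learn.predict_array() should avoid it, and it would also explain the behaviour I’m seeing, since learn.predict() puts the model into eval mode as part of its run.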