TF Lite actually uses low-level APIs: the NNAPI on Android and CoreML on iOS.
So can PyTorch Mobile (which is what they do).
I guess one man’s “gory” is another’s “informative”.
So that’s interesting, since I’m using PyTorch 1.4 on my dl rig (using a GPU) and in my FastAPI app (no GPU, just CPU).
As for fastai … I installed it via conda on my dl rig, but through pip on my local machine where I’m building the FastAPI app. Both are pinned to 0.0.15.
So is there an issue where conda-installed packages differ from pip-installed packages … even if we lock them down to the same version?
There are no conda packages for fastai2 (yet), so I don’t think that’s the problem. To confirm this is the actual error, just try an app that only has the `load_learner` line. Also, as usual, I’d need to see the full code you’re running.
I avoid conda altogether as it’s a little too magical for my taste.
Here is how I set things up: https://forums.fast.ai/t/setting-up-fastaiv2-locally/65856
```python
from io import BytesIO

from fastapi import FastAPI, File, UploadFile
from pydantic import BaseModel
from fastai2.vision.all import *

path = Path(__file__).parent
app = FastAPI()

# --- fastai2 tests ---
inf_learn = torch.load(path/'export.pkl', map_location='cpu')
inf_learn.dls.cpu()
classes = inf_learn.dls.vocab

@app.post("/predict")
async def predict(image: UploadFile = File(...)):
    img_bytes = await image.read()
    img_np = np.array(Image.open(BytesIO(img_bytes)))
    img = PILImage.create(img_bytes)
    pred_class = inf_learn.predict(img_bytes)[0]
    return {
        "predicted_class": pred_class
    }
```
and here is what I have, version-wise, when I run `conda list`:

```
fastai2        0.0.15   pypi_0   pypi
fastcore       0.1.16   pypi_0   pypi
torch          1.4.0    pypi_0   pypi
torchvision    0.5.0    pypi_0   pypi
```
… NOW … on my dl rig I used the course’s `environment.yml` to install things, and it is a bit different:

```
fastai2        0.0.15   pypi_0                            pypi
fastcore       0.1.16   pypi_0                            pypi
pytorch        1.4.0    py3.7_cuda10.1.243_cudnn7.6.3_0   pytorch
torchvision    0.5.0    py37_cu101                        pytorch
```
So is the conda build of PyTorch 1.4 different from what we get from pip’s 1.4? That seems really confusing if that is the case.
Are you using virtualenv?
So can you just try removing everything but `inf_learn = torch.load(path/'export.pkl', map_location='cpu')`? (By the way, there is `load_learner` with a `cpu` arg now.)
Yes, I use virtualenv.
Yes, I tried that first … same error:

```python
inf_learn = load_learner(path/'export.pkl')
classes = inf_learn.dls.vocab
```
The `classes` comes back all good … but the `predict` returns the same stack trace.
Oh, add something with `num_workers=0` to your dl. It’s the multiprocessing that is broken there.
Where do I add that?
Eh… you’re using `predict`, so I guess this is on me to fix `predict`…
Can you try

```python
dl = learn.dls.test_dl([img_bytes])
dl = dl.new(num_workers=0)
learn.get_preds(dl=dl)
```

and see if it has the same bug?
Or just:

```python
dl = learn.dls.test_dl([img_bytes], num_workers=0)
learn.get_preds(dl=dl)
```
Well, it’s not returning an error (good) … but it’s not returning anything besides null:
```python
dl = inf_learn.dls.test_dl([img_bytes], num_workers=0)
pred_class = inf_learn.get_preds(dl=dl)[0]
return {"predicted_class": pred_class}
```

returns

```json
{
  "predicted_class": {}
}
```
If I do this:

```python
dl = inf_learn.dls.test_dl([img_bytes], num_workers=0)
output = inf_learn.get_preds(dl=dl)
return {
    "get_preds": output,
    "classes": classes
}
```

I get

```json
{
  "get_preds": [
    {},
    null
  ],
  "classes": [
    "black",
    "grizzly",
    "teddy"
  ]
}
```
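(A side note on that empty `{}`: even once `get_preds` returns real predictions, raw tensors won’t survive JSON encoding as-is, so it’s safer to convert them to plain Python types before returning. A stdlib-only sketch — the `probs` values here are made up, standing in for what `probs.tolist()` would give:)

```python
import json

# `probs` stands in for probs.tolist() from get_preds; raw tensors
# aren't JSON-serializable, so convert before building the response.
probs = [0.02, 0.95, 0.03]
classes = ["black", "grizzly", "teddy"]

best = max(range(len(probs)), key=probs.__getitem__)  # argmax over probs
payload = {
    "predicted_class": classes[best],
    "probs": dict(zip(classes, probs)),
}
print(json.dumps(payload))
```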
Weird. It should be the predictions.
Pushed a fix in `Learner.predict`, if you have any way to install fastai master.
I’ll investigate more tomorrow morning.
Cool. Thanks.
Is there a `pip install` syntax to simply grab from master? Or do I need to do an editable install?
The latter.
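(For reference, an editable install of master looks something like this — the repo URL is my assumption, adjust to wherever master lives:)

```shell
# clone master and install it as an editable package, so a later
# `git pull` picks up fixes without reinstalling
git clone https://github.com/fastai/fastai2.git
cd fastai2
pip install -e .
```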
Tested and it works!
Lmk when you and the fam are in San Diego … beers on me
Btw, what did you change? Did you just adjust the `num_workers` on the `test_dl`?
I changed the one line in `predict` with `test_dl`, yes, to put `num_workers=0`.
Would adding `albumentations` to fastai be as simple as creating a new `Pipeline` object and using that as training augmentation?
A new `Transform`, I believe. I actually will try to add an example of this in the tutorial I was mentioning before.
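(An untested, pseudocode-level sketch of what such a wrapper might look like — the class name is made up, and both fastai2’s `Transform`/`encodes`/`split_idx` conventions and albumentations’ `aug(image=ndarray)["image"]` call style are my assumptions here:)

```python
# Untested sketch: wrap an albumentations augmentation as a fastai2 Transform.
import numpy as np
from albumentations import ShiftScaleRotate
from fastai2.vision.all import PILImage, Transform

class AlbumentationsTfm(Transform):
    split_idx = 0  # split_idx=0 -> apply only to the training set

    def __init__(self, aug): self.aug = aug

    def encodes(self, img: PILImage):
        # albumentations works on numpy arrays; convert back to PILImage after
        return PILImage.create(self.aug(image=np.array(img))["image"])

# e.g. pass AlbumentationsTfm(ShiftScaleRotate(p=0.5)) via item_tfms
```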