I got it working!
I had to trick the ImageDataBunch constructor by feeding it some fake filenames, but I managed to deploy a pre-trained model on a non-GPU machine!
Here’s the rough outline I used. I had to hard-code my list of labels:
from io import BytesIO
from pathlib import Path

import torch
from fastai.vision import *  # fastai v1: ImageDataBunch, get_transforms, models, ConvLearner, open_image

cat_images_path = Path("/tmp")
cat_fnames = [
"/{}_1.jpg".format(c)
for c in [
"Bobcat",
"Mountain-Lion",
"Domestic-Cat",
"Western-Bobcat",
"Canada-Lynx",
"North-American-Mountain-Lion",
"Eastern-Bobcat",
"Central-American-Ocelot",
"Ocelot",
"Jaguar",
]
]
cat_data = ImageDataBunch.from_name_re(
cat_images_path,
cat_fnames,
r"/([^/]+)_\d+\.jpg$",
ds_tfms=get_transforms(),
size=224,
)
cat_learner = ConvLearner(cat_data, models.resnet34)
# Load the weights I trained elsewhere, mapping them onto the CPU
cat_learner.model.load_state_dict(
    torch.load("usa-inaturalist-cats.pth", map_location="cpu")
)
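Since the filenames are fake, it’s worth sanity-checking that the regex pulls the right label out of each one. A quick stdlib-only check, independent of fastai (the filenames here are just two of the hard-coded examples):

```python
import re

# The same pattern passed to ImageDataBunch.from_name_re
pattern = re.compile(r"/([^/]+)_\d+\.jpg$")

fnames = ["/Bobcat_1.jpg", "/Canada-Lynx_1.jpg"]
labels = [pattern.search(f).group(1) for f in fnames]
print(labels)  # ['Bobcat', 'Canada-Lynx']
```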
Now I can evaluate a new image like so:
img = open_image(BytesIO(image_bytes))  # image_bytes: the raw bytes of the uploaded image
losses = img.predict(cat_learner)
prediction = cat_learner.data.classes[losses.argmax()]
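For anyone following along without fastai installed, that last lookup is just “index of the largest score, mapped through the class list.” A toy sketch in plain Python (the scores here are made up, not real model output):

```python
# Stand-ins for cat_learner.data.classes and the tensor returned by img.predict()
classes = ["Bobcat", "Canada-Lynx", "Ocelot"]
losses = [0.2, 0.1, 0.7]  # hypothetical output scores

# Plain-Python equivalent of classes[losses.argmax()]
prediction = classes[max(range(len(losses)), key=losses.__getitem__)]
print(prediction)  # Ocelot
```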
Note that I’m loading models.resnet34 (which means downloading the pretrained weights the first time the code runs) even though I don’t think it’s actually needed, since I’m loading my own weights from disk. I couldn’t figure out how to call the ConvLearner constructor without passing an architecture.