Right way to use loaded models for one image (on CPU)

I saved the model as in Lecture 8 and am loading it as follows (I'm using it for binary classification, hence c=2 and is_multi=False):

from fastai.conv_learner import *
from fastai.dataset import *
from fastai.core import A, T, VV_

sz = 224
# Rebuild the same architecture and transforms the weights were trained with.
trn_tfms, val_tfms = tfms_from_model(resnet34, sz, crop_type=CropType.NO)
model = ConvnetBuilder(resnet34, 2, is_multi=False, is_reg=False).model
# Map the saved weights onto the CPU before loading them.
state_dict = torch.load('models/clas_one.h5', map_location=torch.device('cpu'))
model.load_state_dict(state_dict)
model.eval()  # switch to inference mode (disables dropout, freezes batch-norm stats)

test_img = open_image('images/001.jpg')
batch = [T(val_tfms(test_img))]    # apply the validation transforms, convert to a tensor
input_x = VV_(torch.stack(batch))  # stack into a batch of one and wrap for the model
pred = model(input_x)
print(pred)

Is this the right way to do inference on one image?
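One small simplification, for what it's worth: the transformed image is a single (channels, height, width) array, and the model just needs a leading batch axis, so indexing with None does the same job as stacking a one-element list. A minimal sketch of the shape manipulation, using a random array as a stand-in for a real transformed image:

```python
import numpy as np

# Stand-in for the output of val_tfms(open_image(...)):
# one transformed image with shape (channels, height, width).
im = np.random.rand(3, 224, 224).astype(np.float32)

# The model expects a batch dimension up front: (batch, channels, height, width).
# Indexing with None inserts that axis without copying the data.
batch = im[None]
print(batch.shape)  # (1, 3, 224, 224)

# Equivalent to stacking a one-element list, as in the snippet above.
stacked = np.stack([im])
assert stacked.shape == batch.shape
```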

This is touched on briefly in lesson 6, around minute 13:15. There's not a ton of info in there, but it's worth watching. fastai also has a "load_model" function in torch_imports that can be used, but it takes an already-constructed model, so you still have to build the architecture yourself first, as in your snippet.
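One detail worth keeping in mind when reading off the prediction: if I remember right, fastai's ConvnetBuilder ends the head with a LogSoftmax for single-label classification, so the raw output is log-probabilities; exponentiating recovers the class probabilities. A sketch in plain Python (the numbers are made up for illustration):

```python
import math

# Hypothetical raw model output for one image, two classes,
# assuming the head ends in LogSoftmax (so these are log-probabilities).
log_probs = [-0.105, -2.303]

# Exponentiate to recover probabilities; they should sum to ~1.
probs = [math.exp(lp) for lp in log_probs]
print(probs)       # roughly [0.90, 0.10]
assert abs(sum(probs) - 1.0) < 0.01

# The predicted class is the index of the largest (log-)probability.
pred_class = max(range(len(probs)), key=probs.__getitem__)
print(pred_class)  # 0
```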

@Swair did you find that your solution works?