I’m trying to implement Grad-CAM as nicely shown by @henripal and @MicPie here:
In those posts, @henripal uses the learner to access an image batch.
Now, in my case I don't have that anymore: I load the model and run inference on a single image (so far via `predict()`), like so:
```python
from fastai.vision import *  # import added for completeness

MODEL = 'stage-2.pth'
path = Path("/tmp")
data = ImageDataBunch.single_from_classes(
    path, labels, tfms=get_transforms(max_warp=0.0), size=299
).normalize(imagenet_stats)
learn = create_cnn(data, models.resnet50)
learn.model.load_state_dict(torch.load("models/%s" % MODEL, map_location="cpu"))
```
I set up the hooks as suggested in the post, but I can't get them to work with my setup… How do I go from my image file to a tensor that fits the model (ResNet-50, 299 px)?
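For concreteness, the hook setup I mean boils down to something like the following. This is a pure-PyTorch sketch on a toy model (the real thing would use `learn.model` and its last conv layer); the layer choice and hook wiring here are my own reconstruction, not exactly what the linked post does:

```python
import torch
import torch.nn as nn

# Toy stand-in for learn.model; in the real setup you'd hook the last
# conv layer of the ResNet-50 body instead of model[0].
model = nn.Sequential(
    nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4, 2),
)
target_layer = model[0]

acts, grads = {}, {}

def fwd_hook(module, inp, out):
    acts["v"] = out.detach()          # save feature maps on the forward pass

def bwd_hook(module, grad_in, grad_out):
    grads["v"] = grad_out[0].detach() # save gradients w.r.t. those maps

h1 = target_layer.register_forward_hook(fwd_hook)
h2 = target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 8, 8)           # fake input batch
out = model(x)
out[0, out.argmax()].backward()       # backprop the top-class score

# Grad-CAM: weight each activation map by its channel-averaged gradient
weights = grads["v"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * acts["v"]).sum(dim=1))
h1.remove(); h2.remove()
print(cam.shape)  # torch.Size([1, 8, 8])
```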
In the example, `out = learn.model(img_tensor)` is used…
I somehow need to get from my image file to a tensor in the right format… Does any kind soul know where I should look?
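What I've pieced together so far is that the pipeline amounts to: resize to 299, scale to [0, 1], and normalize with the ImageNet stats. A minimal pure-PyTorch sketch of that preprocessing (shapes and stats are my assumptions; the fastai way is presumably to use `data.one_item(img)` on an `open_image()` result, but I haven't confirmed it):

```python
import torch

# ImageNet normalization stats (what fastai's imagenet_stats contains)
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406])
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225])

def to_model_tensor(img_uint8, size=299):
    """Turn an HxWx3 uint8 image tensor into a normalized 1x3xSxS float batch,
    roughly what the ImageDataBunch pipeline does (resize, scale, normalize)."""
    x = img_uint8.permute(2, 0, 1).float() / 255.0   # HWC uint8 -> CHW float in [0, 1]
    x = torch.nn.functional.interpolate(
        x.unsqueeze(0), size=(size, size),
        mode="bilinear", align_corners=False)        # resize to the model's input size
    x = (x - IMAGENET_MEAN.view(1, 3, 1, 1)) / IMAGENET_STD.view(1, 3, 1, 1)
    return x                                          # shape: (1, 3, size, size)

# fake 400x300 RGB image standing in for a loaded file
fake_img = torch.randint(0, 256, (400, 300, 3), dtype=torch.uint8)
batch = to_model_tensor(fake_img)
print(batch.shape)  # torch.Size([1, 3, 299, 299])
```

This `batch` should then be usable as the `img_tensor` in `out = learn.model(img_tensor)`, after `learn.model.eval()`.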