Hi,
I’ve read and implemented this great tutorial (http://dev.fast.ai/tutorial.siamese, 24_tutorial.siamese.ipynb) and ended up with a network that performs astonishingly well.
Now I’d like to modify it so that I can precompute the features for my images and store them in a DB; during inference, I would then only have to pass one image through the model. I’m aware that with the current architecture, as presented in the notebook, I will still have to pass both feature sets through the head of the model. But I thought I could at least avoid recomputing the body part each and every time.
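To make the storage part more concrete, this is roughly what I have in mind for the DB side (just a sketch with a file-based stand-in for a real DB; the helper names are made up and not from the tutorial):

from pathlib import Path
import torch

FEATURE_DIR = Path("feature_db")   # stand-in for a real DB
FEATURE_DIR.mkdir(exist_ok=True)

def store_features(key, feats):
    # detach, move to CPU and serialize the encoder output for one image
    torch.save(feats.detach().cpu(), FEATURE_DIR/f"{key}.pt")

def load_features(key, device="cuda"):
    # retrieve the precomputed encoder output and move it back to the GPU
    return torch.load(FEATURE_DIR/f"{key}.pt").to(device)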
Being new to fastai, I tried the following, which, alas, didn’t work as I had hoped…
# prepare the images using the same transforms that we pass to the dataloaders (as after_item)
im1 = PILImage.create(files[0])
im1_final = Resize(224)(im1)
im1_final = ToTensor()(im1_final)
im1_final = IntToFloatTensor()(im1_final)

im2 = PILImage.create(files[2])
im2_final = Resize(224)(im2)
im2_final = ToTensor()(im2_final)
im2_final = IntToFloatTensor()(im2_final)

# do what the forward method does
# my goal is to store the result of learn.model.encoder in a DB and just retrieve it when doing inference
# reason: I will usually want to compare one image with all that are in the DB to recognize objects I’ve seen before

# this should come from the DB for one of the two images (here I compute both)
enc_body1 = learn.model.encoder(im1_final.cuda().unsqueeze(0))
enc_body2 = learn.model.encoder(im2_final.cuda().unsqueeze(0))

# this will always have to be done
ftrs = torch.cat([enc_body1, enc_body2], dim=1)
ftrs_final = learn.model.head(ftrs)
print(ftrs_final)
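For reference, this is roughly how I get the prediction I compare against (using the SiameseImage class from the notebook; the exact call is from memory, so it may not be verbatim):

# baseline: let fastai apply all the item/batch transforms itself
siamtest = SiameseImage(PILImage.create(files[0]), PILImage.create(files[2]))
res = learn.predict(siamtest)
print(res)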
Unfortunately, this doesn’t yield the same result as learn.predict(). Could someone please point me in the right direction? What would be the (or one) right way to achieve such a “shortened” inference?
Thanks in advance & best regards,
Chris