Image similarity

I'm building image similarity using layer [1][4] of my ResNet50 learner. When I extract features with the hooks method,

```python
sf = SaveFeatures(learn.model[1][4])
_ = learn.predict(data.train_ds.x[0])
sf.features
```

the result is different from what I get when I apply the same hook to a single image loaded through a single-image DataBunch:
```python
tfms = get_transforms(do_flip=True, flip_vert=True, max_lighting=0.1, max_rotate=359, max_zoom=1.05, max_warp=0.1)
classes = data.classes
data3 = ImageDataBunch.single_from_classes(path, classes, ds_tfms=tfms, size=224).normalize(imagenet_stats)

learn1 = create_cnn(data, models.resnet50)
learn1.load('/content/drive/My Drive/run it/Kvest-stage-a1');
sf1 = SaveFeatures(learn.model[1][4])
learn.predict(open_image(img_path[3]))
sf1.features
```

https://www.kaggle.com/abhikjha/fastai-hooks-and-image-similarity-search
I'm using this example as a reference; it uses PyTorch hooks to extract the features.
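(For context, SaveFeatures there is – as far as I can tell – the usual PyTorch forward-hook wrapper, something along these lines:)

```python
# Sketch of the SaveFeatures hook wrapper, assumed to match the one in the linked kernel
class SaveFeatures():
    features = None
    def __init__(self, m):
        # register a forward hook on the given module
        self.hook = m.register_forward_hook(self.hook_fn)
    def hook_fn(self, module, input, output):
        # store a CPU copy of the module's output from the latest forward pass
        self.features = output.detach().cpu().numpy()
    def remove(self):
        self.hook.remove()
```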

Strangely, all the negative values become zero in the second case – can anyone help?

I am not familiar with SaveFeatures – what is it, where does it come from, and how is it defined in terms of its input and output?

However, if the only difference you observe is that negative numbers became 0, perhaps you should check whether you are extracting the features (activations) from exactly the same layer. It could be that, for some reason, your first output (with the negative numbers) was extracted from a layer before a ReLU, and your second output (with zeroes) from a layer after a ReLU?
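If it helps, one way to double-check is to print the head's sub-modules together with their indices and see where the ReLU sits relative to the index you are hooking. This is just a sketch, assuming the standard create_cnn head layout:

```python
# List the head's sub-modules with their indices, so you can see
# whether model[1][4] sits before or after the ReLU in your head.
for i, m in enumerate(learn.model[1]):
    print(i, m.__class__.__name__)

# If the module you hook is the ReLU itself, its output will be >= 0
# everywhere, which would explain the zeros.
```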

Feel free to cut your notebook down into a minimal example notebook that you can share a link to; then I'm sure someone here will be able to have a quick look and let you know exactly what went wrong.

Yijin

@utkb I have checked, and I'm using the same layer for both feature extractions, i.e. one for the train dataset and one for the single-image test data. Both are giving different results. I have also updated my post with the link I'm using for reference.

Does the notebook you used as a reference return the right features, and is it only your own implementation (based on that reference notebook) that shows the difference? Or do both notebooks give the same unexpected difference in features? If it's the former, the quickest way forward is to compare your code against the reference and debug from there, including asking the original author for tips/help. If it's the latter, I guess you'll have to figure out what is actually wrong, perhaps again by getting in touch with the original author?
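One quick concrete check, by the way: run the same image through both code paths and compare the two captured feature vectors. If the second one is just the first with its negatives clipped to zero, that would support the post-ReLU hypothesis above. A minimal sketch (variable names are only illustrative):

```python
import numpy as np

# Two captures of the SAME image, one from each code path
f_first  = np.asarray(sf.features).ravel()
f_second = np.asarray(sf1.features).ravel()

print(np.allclose(f_first, f_second))                  # identical?
print(np.allclose(np.maximum(f_first, 0), f_second))   # second == ReLU(first)?
```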

Sorry, I don't have anything else to add without spending more time looking at that reference notebook and/or your copy-pasted code…

Yijin