Model loaded with load_learner() gives different results with and without dataloaders

I trained a simple vision model, saved it with learner.export, and later loaded it with load_learner('path') to make predictions on two test images. That works fine, but after a while I wanted to print a confusion matrix, so I attached a dataloaders object to the learner:

learner.dls = my_dataloaders

Then I ran predictions on the same two input images as before, but now the predictions are different. How come? My (apparently wrong) understanding is that the model weights and architecture are detached from the data associated with the same learner object, so why would merely attaching a dataloaders structure change the model's outputs? (I didn't run any tuning or fitting method between attaching the dataloaders and making the new predictions.)

learner = load_learner('modep_path')

output, indices, pcnt = learner.predict(image_file.copy())
print(pcnt[indices.item()])      # 0.866 for the record
learner.dls = my_dataloaders
output2, indieces2, pcnt2 = learner.predict(image_file.copy())
print(pcnt2[indices2.item()])    # 0.999 for the record

Hmm, do the dataloaders you assigned have any transforms or augmentations that differ from the original dataloaders the learner was trained with?

Well, I made quite a few changes to the DataBlock before I noticed that strange behaviour; I experimented a lot with aug_transforms and item_tfms and used RandomResizedCrop heavily.
Still, how come data that is merely attached to the learner changes its behaviour?

I think it's because whatever image you input to your learner for inference, the learner applies the exact same transforms (though maybe not the augmentations) to it. This is because the learner was trained to view images prepared in a particular way.

I don’t think the same augmentations are applied unless explicitly stated, but I do think the same transforms are applied. It’s a bit fuzzy in my head.
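One way to take the guesswork out: predict() builds a test DataLoader from whatever is in learner.dls, so you can print the pipelines it would use under each setup and compare them directly. Roughly like this (using image_file and my_dataloaders from the posts above):

from fastai.vision.all import load_learner

exported = load_learner('modep_path')

# predict() internally does dls.test_dl([item]); these are the pipelines
# that would actually run on a single inference image
dl_exported = exported.dls.test_dl([image_file])
dl_attached = my_dataloaders.test_dl([image_file])

print(dl_exported.after_item, dl_exported.after_batch)  # transforms baked into the export
print(dl_attached.after_item, dl_attached.after_batch)  # transforms of the attached dataloaders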

But I already make sure the input image matches the size given in batch_tfms, using what I believe is called “presizing” from chapter 4 of the book: I first crop images to 480 with RandomResizedCrop and then let them undergo the transformations in aug_transforms, which crop them again down to a size of 256.
And my new input images are already 256x256, so what would RandomResizedCrop have to do with it?
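To make that concrete, the presizing setup I mean looks roughly like this (a sketch, not my exact code; the labelling function, seed and path are placeholders, and min_scale is a guess):

from fastai.vision.all import *

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(seed=42),                  # placeholder seed
    get_y=parent_label,                                # placeholder labelling
    # item transforms run per image on the CPU: everything becomes 480x480 first
    item_tfms=RandomResizedCrop(480, min_scale=0.75),
    # batch transforms run per batch on the GPU: augment, then final size 256x256
    batch_tfms=aug_transforms(size=256),
)
my_dataloaders = dblock.dataloaders(path_to_images)    # placeholder path variable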

I assume you don’t have the same spelling mistake in your code and it is just a typo in the post?

output2, indieces2, pcnt2 = learner.predict(image_file.copy())
print(pcnt2[indices2.item()]) #0.999 for the record

If your image is already 256x256, then that image can’t be cropped to 480x480. Perhaps something is happening there? I’m not too sure myself, perhaps somebody else can pitch in.
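If you want to see exactly what your pipeline does to a 256x256 input, you could push one image through the test DataLoader by hand and look at the size the model actually receives; something like this (rough sketch):

from fastai.vision.all import PILImage

img = PILImage.create(image_file)   # the 256x256 test image from above
dl = learner.dls.test_dl([img])
xb = dl.one_batch()[0]
print(img.size, '->', xb.shape)     # original size vs. the tensor the model sees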

Double check for that typo AllenK mentioned too.

Naturally I don't, the mistake was only in the forum post.

I realized that batch_tfms are applied to the inference input as well. Why would that happen? I thought they were only applied per batch during training.
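If I understand the docs correctly, each fastai transform carries a split_idx attribute: split_idx=0 means training split only (the random augmentations), while split_idx=None (e.g. IntToFloatTensor, Normalize) means it runs on every split, including the test pipeline that predict() builds. Listing them shows which category each batch transform falls into (rough sketch):

# split_idx=0 -> applied only to the training split; None -> applied everywhere
for tfm in learner.dls.valid.after_batch.fs:
    print(type(tfm).__name__, getattr(tfm, 'split_idx', None))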

I ran some more tests and I really can't tell why passing the exact same dataloaders (built with the same splitter random seed) to the loaded learner results in different output.
I used the with_input=True argument of the .predict() method to retrieve the image as modified by the attached dataloaders, compared those images from both learners (with and without dataloaders), and they look the same (the one from the learner with dataloaders is zoomed in by at most 2%), yet the two now give different class outputs.
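A stricter check than eyeballing the decoded images would be to compare the returned tensors numerically, roughly like this (a sketch using the same objects as in my first post):

import torch

learner = load_learner('modep_path')
inp1, out1, idx1, pcnt1 = learner.predict(image_file.copy(), with_input=True)

learner.dls = my_dataloaders
inp2, out2, idx2, pcnt2 = learner.predict(image_file.copy(), with_input=True)

print(torch.equal(inp1, inp2))                        # bit-identical decoded inputs?
if inp1.shape == inp2.shape:
    print((inp1.float() - inp2.float()).abs().max())  # largest per-pixel difference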

Also, another question: why is the error_rate from the final epoch during training different from manually running error_rate(*learner.get_preds()) later on?
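For reference, get_preds() returns a (predictions, targets) tuple, so error_rate needs both arguments, and the training-time number can be read back from the recorder; this is roughly the comparison I mean (sketch, assuming the usual fastai.vision.all star import):

preds, targs = learner.get_preds()      # defaults to the validation set
print(error_rate(preds, targs))         # recomputed metric
print(learner.recorder.values[-1])      # [train_loss, valid_loss, metrics...] of the last epoch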


Hi @snake, this sure sounds strange. Do you have a Colab or Kaggle notebook you could share?

I do everything locally. I'm still learning, so I've moved on to tabular models for now.
But I still can't figure out why the dataloaders mess things up, even when they are exactly the same as the ones used for training.

Were you able to diagnose the problem? I’m facing the same issue.