learn.predict() always returns the same value

Edit: running interp.plot_top_losses() shows that the model does indeed predict both classes. Looking into the cnn_learner docs, I see that it has a normalize parameter which defaults to True. How can I extract the stats used to normalize the data, and how can I use the Normalize class to apply the same transformation to data I want to test by hand with learn.predict()?
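Is something like this the right way to pull the stats out (assuming fastai v2, where the batch transforms live on the DataLoaders; my guess is that after_batch.fs holds the list of transforms)?

# guess: find the Normalize transform in the batch pipeline (fastai v2)
norm = next(t for t in learn.dls.after_batch.fs if isinstance(t, Normalize))
print(norm.mean, norm.std)  # the stats applied to every batch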

Inspired by this post about transforming time series into images for use with a CNN, I made a model that predicts between two classes, and it seems to work pretty well.

| epoch | train_loss | valid_loss | error_rate | time |
|---|---|---|---|---|
| 3 | 0.121610 | 0.081726 | 0.026846 | 00:11 |

All I’m doing is transforming some time series data of shape (1, 150) into a GramianAngularField image using pyts (see the link above for details, though it’s not too important; the output is just an image).
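For reference, the preprocessing looks roughly like this (parameters are illustrative defaults, not necessarily the ones I used):

# sketch of the preprocessing step with pyts
import numpy as np
from pyts.image import GramianAngularField

series = np.random.randn(1, 150)    # stand-in for one (1, 150) time series
gaf = GramianAngularField()         # default parameters, for illustration
img = gaf.fit_transform(series)[0]  # a (150, 150) image-like array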

The model is as simple as possible:

from fastai.vision.all import *

bs = 16
# the label is whatever follows the '@' in each filename
dls = ImageDataLoaders.from_name_re('data/cnn_images', fnames, pat=r'\@(.*)', bs=bs)
learn = cnn_learner(dls, resnet34, metrics=error_rate).to_fp16()

I’ve checked with dls.show_batch(), and everything looks correct.

So, wanting to dig in, I pulled some of the original time series, computed the GramianAngularField again in the same way, and used learn.predict() on the resulting image. If anything, I figured this would be too easy for the model, because these exact images were probably in the training set.
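The by-hand test looks something like this (the save step and variable names are illustrative, not my exact code):

# illustrative sketch of the by-hand test, reusing gaf and series from above
import matplotlib.pyplot as plt
from fastai.vision.all import PILImage

img = gaf.fit_transform(series)[0]  # recompute the field
plt.imsave('test.png', img)         # save it as an image
pred_class, pred_idx, probs = learn.predict(PILImage.create('test.png'))
print(pred_class, probs)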

But instead, every prediction is class “1”. Counting the training images I passed in, the classes are quite imbalanced: about 92% belong to one class.

yy = [str(x) for x in dls.train_ds.items]
nons = len([g for g in yy if 'notarrival' in g])  # count the negative class from filenames
print(f'{nons} non arrivals\n{len(yy)} total labels\n{round(nons / len(yy), 3) * 100}% negative')
1640 non arrivals
1788 total labels
91.7% negative

If my model really did predict the same class every time, wouldn’t its accuracy be ~91.7% (the majority-class fraction), rather than the ~97.3% implied by the error_rate above?

Is there a transformation (perhaps normalization) that takes place in the model or dataloader, even though I didn’t pass any transformations? How can I see what it is, so that I can learn.predict() on newly created GramianAngularFields and make sure the predictions vary as they should?
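Is this the right way to list what actually gets applied (assuming fastai v2 pipelines)?

# print the transform pipelines the DataLoaders will apply (fastai v2)
print(learn.dls.after_item)   # per-item transforms (e.g. Resize, ToTensor)
print(learn.dls.after_batch)  # per-batch transforms (e.g. IntToFloatTensor, Normalize)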

Finally, learn.get_preds()[1] returns:

tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1…etc])

This seems to show that the model does indeed make predictions for each class. How can I see the images that correspond to these predictions? I’ve tried to find the answer, but many forum questions are from 2018 and use outdated functions.
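The closest I’ve gotten is something like this, assuming get_preds() runs over the validation set in order so the outputs line up with dls.valid_ds.items:

# sketch: align get_preds() outputs with the validation items (fastai v2 assumed)
preds, targs = learn.get_preds()  # returns (probabilities, targets)
pred_idx = preds.argmax(dim=1)
for fname, p, t in zip(dls.valid_ds.items, pred_idx, targs):
    print(fname, dls.vocab[int(p)], dls.vocab[int(t)])  # path, predicted, actual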

Thank you for your time!