Inference always results in same class

I have a model I trained in fastai. It achieves 95% accuracy, but at inference time my predictions are always the same class. I’ve never had this problem with RGB images, so I am assuming something is wrong with the fact that the data is grayscale.

For reference, the model was trained on grayscale images that were already in shape (24, 24, 3), and the test images I run inference on are also (24, 24, 3). I don’t know if this has any effect, but when I look at the printout from my data object, I see images in shape (3, 24, 24); see the printout below. I’ve tried getting my test images into that shape by swapping the axes (see the sketch below), but then they are no longer accepted as proper input. What do I do?
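For illustration, here is a minimal sketch of the axis swap I tried, using a dummy array rather than my real data:

import numpy as np

# Dummy stand-in for one test image: channels-last (HWC), as cv2 returns it
img = np.zeros((24, 24, 3), dtype=np.uint8)

# The swap I attempted: HWC -> CHW, to match the (3, 24, 24) my data object prints
chw = np.transpose(img, (2, 0, 1))
print(chw.shape)  # (3, 24, 24)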

Here is my training code:

data = ImageDataBunch.from_folder(path, train='train', valid='valid', size=24, bs=64, num_workers=8)
learn = cnn_learner(data, models.resnet34, metrics=[accuracy])
learn.fit_one_cycle(4)                             # train the new head first
learn.unfreeze()                                   # then fine-tune the whole network
learn.lr_find()
learn.recorder.plot(suggestion=True)
learn.fit_one_cycle(2, max_lr=slice(3e-7, 3e-6))
learn.export('model.pkl')

Now for inference:

learn = load_learner('', 'model.pkl')
r = random.randint(0, 100)
t = glob.glob('fastai_format/test/pos/*')[r]  # pick a random positive test image
img = cv2.imread(t)                           # cv2 loads images channels-last (HWC)
print(img.shape)

This gives (24, 24, 3).
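One thing I wasn’t sure mattered: cv2.imread returns channels in BGR order. Since my grayscale images presumably have the same value in all three channels, that ordering shouldn’t matter, but if it did, the conversion would be:

img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # BGR -> RGB; a no-op for replicated grayscale channels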

And for the actual prediction:

pil_im = PImage.fromarray(img)      # numpy array -> PIL image
x = pil2tensor(pil_im, np.float32)  # PIL image -> channels-first float tensor
learn.predict(Image(x))             # wrap in a fastai Image and predict

This gives me:

(Category tensor(1), tensor(1), tensor([0., 1.]))

I get class 1 every time. It’s a 2-class problem, and I even get class 1 every time if I run inference on the training data. The model reports 95% accuracy, so this is confusing.

What am I doing wrong?

Extra info: here is the printout for data:

Train: LabelList (9603 items)
x: ImageList
Image (3, 24, 24),Image (3, 24, 24),Image (3, 24, 24), ...
...

I am using fastai version 1.0.61.

May I ask whether there are a lot more class 1 images than class 2 images in your dataset, i.e. only a few class 2 images?

It’s slightly imbalanced: 3800 in one class and 5800 in the other.

However, looking at the confusion matrix, the imbalance doesn’t seem to be the problem.

Just guessing, but maybe you accidentally turned your original image into a random, white-noise image somewhere along the line. For example, pil2tensor does some transposes to your image:

def pil2tensor(image:Union[NPImage,NPArray],dtype:np.dtype)->TensorImage:
    "Convert PIL style `image` array to torch style image tensor."
    a = np.asarray(image)
    if a.ndim==2 : a = np.expand_dims(a,2)
    a = np.transpose(a, (1, 0, 2))
    a = np.transpose(a, (2, 1, 0))
    return torch.from_numpy(a.astype(dtype, copy=False))
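Tracing those transposes through on a dummy array (a quick sketch, assuming a uint8 HWC input like cv2 gives you) shows two things: the output is channels-first, and the values are left in the 0–255 range, with no scaling or normalization:

import numpy as np

a = np.full((24, 24, 3), 255, dtype=np.uint8)  # dummy all-white HWC image
t = pil2tensor(a, np.float32)
print(t.shape)  # torch.Size([3, 24, 24]) -- channels-first, like your data printout
print(t.max())  # tensor(255.) -- pil2tensor itself does not divide by 255 or normalize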

Are you recommending I preprocess my test images with this function? If so, what do I pass for dtype?

Hi @cmeaton

Could you modify the inference as follows and check what you get?

# fastai v1 API, to match your 1.0.61 install
learn = load_learner('', 'model.pkl')
r = random.randint(0, 100)

# If you want a prediction for only one item
t = glob.glob('fastai_format/test/pos/*')[r]
pred = learn.predict(open_image(t))  # open_image loads and converts the file the fastai way
print(pred)

# If you want predictions for all items
test_items = ImageList.from_folder('fastai_format/test/pos')
learn = load_learner('', 'model.pkl', test=test_items)

# Get the predictions
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
print(preds)

This way, fastai does all the internal transformations itself, and you don’t have to worry about loading the image, normalizing it, or converting it into CHW or some other format. Then let’s see what we get: if these predictions look sensible while your manual pipeline still returns class 1 every time, the problem is in your preprocessing.
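Once you have preds (the class probabilities for each image), you can turn them into labels like this (a sketch; pred_idxs and pred_labels are just illustrative names):

pred_idxs = preds.argmax(dim=1)                           # most probable class index per image
pred_labels = [learn.data.classes[i] for i in pred_idxs]  # map indices to class names
print(pred_labels)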

Thanks,
Vinayak.
