How do we use our model against a specific image?

ty! I will try that this evening.

It’s nearly the same as what you did. It also adds a new axis to make it a 4D tensor, plus a conversion to a NumPy array with dtype float…

np.einsum() is just switching the axes here (in all likelihood that’s all it’s doing, since images read with cv2 come back in height × width × channels order)
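For reference, here is a minimal sketch of that preprocessing (the file name and the 200x200 size are just placeholders):

import cv2
import numpy as np
import torch

img = cv2.imread('bloom.jpg')           # HWC layout, BGR channel order, uint8
img = cv2.resize(img, (200, 200))       # resize to the model's input size
img = np.einsum('ijk->kij', img)        # HWC -> CHW; np.transpose(img, (2, 0, 1)) does the same
img = np.expand_dims(img, axis=0)       # add a batch axis -> (1, C, H, W)
tensor = torch.from_numpy(img).float()  # 4D float tensor, ready for the model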


Tried that, still no luck.

Here is the code snippet… (ty for the help, BTW)

23 print("Loading model...")
24 learn = ConvLearner.pretrained(f_model, data, metrics=metrics)
25 learn.load(f'{sz}')
26 #learn.precompute=False
27 #learn.load("tmp")
28
29 print("Predicting...")
30 img = cv2.imread(f'{PATH}valid/bloom.jpg')        # HWC, BGR
31 img = cv2.resize(img, dsize=(200, 200))
32 img = np.einsum('ijk->kij', img)                  # HWC -> CHW
33 img = np.expand_dims(img, axis=0)                 # add batch axis
34 img = torch.from_numpy(img)
35 preds = learn.model(Variable(img.float()).cuda())
36 print(preds)

Here is the error:

Traceback (most recent call last):
  File "predict2.py", line 35, in <module>
    preds = learn.model(Variable(img.float()).cuda())
  File "/home/ubuntu/src/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/src/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
  File "/home/ubuntu/src/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/src/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 37, in forward
    self.training, self.momentum, self.eps)
  File "/home/ubuntu/src/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/functional.py", line 1011, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size [1, 1024]

@jeremy do you have a suggestion here? I could have sworn I saw an example of this in the notebooks before, but I cannot find anything now.

Can you share your test image size?

200x200?

Coloured Image?

img = cv2.resize(img, dsize=(200, 200))

Change the image size in this line to match your images.
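One gotcha worth noting: cv2.resize takes dsize as (width, height), not (height, width). A small sketch:

import cv2

img = cv2.imread('bloom.jpg')        # img.shape is (height, width, channels)
h, w = img.shape[:2]
img = cv2.resize(img, dsize=(w, h))  # dsize is (width, height) -- note the swap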

Raw image size is 488x844. Most of my images will be slightly different. I did the resize to match these dimensions with the same issue.

BTW, this image works fine when it is in the test directory and I run the model with is_test=True.
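For reference, this is roughly the test-directory route that works (a sketch, assuming the old fastai API and a data object built with a test folder):

# assumes the data object was created with a test folder, e.g.
# data = ImageClassifierData.from_paths(PATH, tfms=tfms, test_name='test')
log_preds = learn.predict(is_test=True)  # predicts over every image in the test folder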

It isn’t a notebook, it is the code I put in the first post. I also used the code you provided as well. I could try to post the model itself somewhere if that would help. I am checking to see if I can post the dataset…there is nothing proprietary in it, but gotta be legit.

I really appreciate the effort in trying to help. Seems like this should be a simple thing. :)

Posted your code on Stack Overflow


Ha, as much as I use SO, I didn’t think to post it there. TY! Can you post the link?

I am really surprised there isn’t a clear-cut way to do this. It seems like it should be a standard feature. Am I thinking about it incorrectly? I understand most things would probably be done in batches, but this is a valid use case.

Not sure whether it will be answered or downvoted…

Possibly it’s because you can’t use feature-wise batch normalization if you only have one element per feature. That check was added in PyTorch 0.3.

It will fail if we use feature-wise batch normalization while training on batches of size 1.

Batch normalization computes:

y = (x - mean(x)) / (std(x) + eps)

If you have one sample per batch then mean(x) = x, so x - mean(x) = 0 and the output will be entirely zero (ignoring the bias). We can’t use that for learning.
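If the goal is inference rather than training, one way around this (a sketch; learn.model in the old fastai is a plain PyTorch module) is to put the model in evaluation mode, so batch norm uses its running statistics instead of per-batch statistics:

from torch.autograd import Variable

learn.model.eval()  # BatchNorm now uses running mean/var, so a batch of 1 is fine
preds = learn.model(Variable(img.float()).cuda())  # img: the (1, 3, H, W) tensor from above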

@yinterian (sorry)

It’s in lesson 3

I need to rush out in a second and haven’t used ConvLearner in ages, but it seems like you are passing a tensor of the wrong dimensions.

The model expects batches: [batch_size, 3, x, x] (I don’t recall off the top of my head whether the channels come first, but I think they do).

You are probably passing something like [3, x, x], but for a single image you should pass [1, 3, x, x].
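For example, a quick sketch of adding that leading batch dimension in PyTorch (unsqueeze is the tensor equivalent of np.expand_dims):

import torch

chw = torch.randn(3, 224, 224)  # a single image, channels first
batch = chw.unsqueeze(0)        # add the batch axis
print(batch.size())             # torch.Size([1, 3, 224, 224])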


Ty for the feedback @radek. I will look into how to do that, but if you have a code example would you please paste it here? I am not a python developer so a bit green on the language.

Can you print some .shape values?

We have already tried that, haven’t we? Via np.expand_dims(x, axis=0)?

(844, 488, 3)


img = cv2.imread(f'{PATH}valid/bloom.jpg')
print(img.shape)                          # (844, 488, 3) -> HWC
img = cv2.resize(img, dsize=(488, 844))   # dsize is (width, height)
img = np.einsum('ijk->kij', img)          # -> (3, 844, 488), CHW
img = np.expand_dims(img, axis=0)         # -> (1, 3, 844, 488), NCHW
img = torch.from_numpy(img)
preds = learn.model(Variable(img.float()).cuda())


@jeremy (sorry)


Hmm, is it reasonable to downgrade to PyTorch 0.2? I wonder what other issues might crop up.
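Before downgrading, it may be worth confirming which version is actually installed, since the per-channel check only appeared around 0.3 (a minimal check):

import torch
print(torch.__version__)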