Can't pass a single image through a model?

I’m running a model on the PETS dataset, following the tutorial at A walk with fastai2 - Vision - Study Group and Online Lectures Megathread. I want to be able to pass a single image through a ResNet-based model that I’ve defined as follows:

from fastai.vision.all import *  # brings in nn, resnet18, create_body, create_head, etc.

class MyPetsNetwork(nn.Module):

    def __init__(self, arch=resnet18):
        super().__init__()
        self.cnn = create_body(arch)  # pretrained backbone with the classifier removed
        # features doubled because the head's AdaptiveConcatPool2d concatenates avg and max pooling
        self.head = create_head(num_features_model(self.cnn) * 2, 37)  # for 37 categories of PETS

    def forward(self, image):
        x = self.cnn(image)
        x = self.head(x)
        return 2 * (x.sigmoid_() - 0.5)  # squash outputs into (-1, 1)

I’ve created a model from this as model = MyPetsNetwork(resnet18).
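As a quick sanity check (a sketch; the batch of 2 and the 224×224 image size are just assumptions for illustration), the model runs fine on a batch of more than one image:

x = torch.randn(2, 3, 224, 224)  # dummy batch of 2 RGB images (size assumed)
out = model(x)
out.shape  # torch.Size([2, 37])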

And I load a single image with:

img = PILImage.create(fnames[0])  # load one image from the dataset
img = image2tensor(img)           # PIL image -> uint8 tensor of shape [3, H, W]
img = img[None, :, :, :]          # add a batch dimension -> [1, 3, H, W]
img = img / 255.0                 # scale pixel values to [0, 1]
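At this point the tensor should look like a batch of one (a quick check):

img.shape  # torch.Size([1, 3, H, W]) for an H×W RGB image
img.dtype  # torch.float32, since dividing a uint8 tensor by a float returns float32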

But passing it to the model as model(img) raises this error:

> /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
>     720             result = self._slow_forward(*input, **kwargs)
>     721         else:
> --> 722             result = self.forward(*input, **kwargs)
>     723         for hook in itertools.chain(
>     724                 _global_forward_hooks.values(),
> 
> <ipython-input-35-1dc3616dbfb7> in forward(self, image)
>       7     def forward(self, image):
>       8         x = self.cnn(image)
> ----> 9         x = self.head(x)
>      10         return 2 * (x.sigmoid_() - 0.5)
>      11 
> 
> /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
>     720             result = self._slow_forward(*input, **kwargs)
>     721         else:
> --> 722             result = self.forward(*input, **kwargs)
>     723         for hook in itertools.chain(
>     724                 _global_forward_hooks.values(),
> 
> /usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in forward(self, input)
>     115     def forward(self, input):
>     116         for module in self:
> --> 117             input = module(input)
>     118         return input
>     119 
> 
> /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
>     720             result = self._slow_forward(*input, **kwargs)
>     721         else:
> --> 722             result = self.forward(*input, **kwargs)
>     723         for hook in itertools.chain(
>     724                 _global_forward_hooks.values(),
> 
> /usr/local/lib/python3.6/dist-packages/torch/nn/modules/batchnorm.py in forward(self, input)
>     134             self.running_mean if not self.training or self.track_running_stats else None,
>     135             self.running_var if not self.training or self.track_running_stats else None,
> --> 136             self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
>     137 
>     138 
> 
> /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
>    2010                 bias=bias, training=training, momentum=momentum, eps=eps)
>    2011     if training:
> -> 2012         _verify_batch_size(input.size())
>    2013 
>    2014     return torch.batch_norm(
> 
> /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in _verify_batch_size(size)
>    1993         size_prods *= size[i + 2]
>    1994     if size_prods == 1:
> -> 1995         raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
>    1996 
>    1997 
> 
> ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 1024])

The error occurs because batch norm is running in training mode, where it expects a proper batch (size > 1) to compute statistics over. How do I tell the model not to use per-batch statistics for inference?

You need to call learn.model.eval() (or in this case, model.eval()) before performing inference. This switches batch norm (and dropout) layers to evaluation mode, so they use their stored running statistics instead of per-batch statistics.
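Something like this (a minimal sketch; the torch.no_grad() context isn't required to fix the error, it just skips gradient bookkeeping during inference):

model.eval()            # batch norm / dropout switch to eval behaviour
with torch.no_grad():   # no gradients needed for inference
    preds = model(img)  # now works with a batch of one
preds.shape             # torch.Size([1, 37])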

So running model.eval() before inference does give me the same value each time now. But if I create the model again and run inference (with model.eval() called beforehand), it gives a different result. (I feel like I’m missing a basic concept in all this, but I can’t figure out what.)

Yes. A model's weights are randomly initialized when it's created (the new head in particular; the create_body backbone is pretrained by default), so two fresh instances are different models until you train them.
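To make two instances agree, load the same saved weights into both; a minimal sketch (the 'pets.pth' filename is just a placeholder):

torch.save(model.state_dict(), 'pets.pth')       # save the first model's weights

model2 = MyPetsNetwork(resnet18)                 # fresh instance, random head
model2.load_state_dict(torch.load('pets.pth'))   # overwrite with the saved weights
model2.eval()                                    # same weights + eval mode -> same outputs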


That makes a lot of sense, thanks!