Size error when using pretrained architectures other than ResNet

RuntimeError Traceback (most recent call last)

in ()
----> 1 learn.lr_find()

13 frames

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in _max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode, return_indices)
537 stride = torch.jit.annotate(List[int], [])
538 return torch.max_pool2d(
--> 539 input, kernel_size, stride, padding, dilation, ceil_mode)
540
541 max_pool2d = boolean_dispatch(

RuntimeError: Given input size: (512x1x1). Calculated output size: (512x0x0). Output size is too small

I am using vgg_16 and my input size is (3, 30, 30).

I get this size error with every architecture except resnet, and on other datasets as well.
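For reference, a minimal sketch (not my actual learner code) that reproduces the same error with a plain torchvision VGG16: each of its five MaxPool2d layers halves the spatial size, so a 30x30 input shrinks to 15, 7, 3, 1, and the last pooling layer cannot produce a non-empty output.

import torch
from torchvision import models

# Shape check only, so pretrained weights are not needed.
vgg = models.vgg16().eval()
x = torch.randn(1, 3, 30, 30)
try:
    with torch.no_grad():
        vgg.features(x)
except RuntimeError as e:
    print(e)  # "... Output size is too small", matching the traceback above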

It might be because VGG networks require input images of a particular size, i.e. (224, 224), while models like ResNet can adapt to any image size (look for the Adaptive Pooling layers in ResNet).
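A common workaround (a sketch in plain PyTorch, not necessarily the exact fastai pipeline used here) is to resize the inputs up to the size VGG expects, e.g. 224x224. For comparison, ResNet's conv stack ends in AdaptiveAvgPool2d((1, 1)), which collapses whatever spatial size remains, so even a 30x30 input goes through.

import torch
import torch.nn.functional as F
from torchvision import models

x = torch.randn(1, 3, 30, 30)

# Upsample to the size VGG was designed for before it reaches the network.
x_big = F.interpolate(x, size=(224, 224), mode='bilinear', align_corners=False)

vgg = models.vgg16().eval()
with torch.no_grad():
    print(vgg(x_big).shape)   # torch.Size([1, 1000]) -- no size error

# ResNet ends with an adaptive pooling layer, so the small input still works.
resnet = models.resnet18().eval()
print(resnet.avgpool)         # AdaptiveAvgPool2d(output_size=(1, 1))
with torch.no_grad():
    print(resnet(x).shape)    # torch.Size([1, 1000])

In a fastai data pipeline, the equivalent fix is simply to set the image size used by your transforms/databunch to something large enough for VGG.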


Thanks @salil_23, I totally forgot about that.
