@Atom-101 Thanks for the answer!
I'd like to summarize and add a few examples for readers who have the same question in mind.
1. If kernel_size is odd, you get same padding from conv_layer() by default.
This is because of conv_layer()'s default padding.
I extracted the relevant source code from here for your reference:
if padding is None: padding = (ks-1)//2 if not transpose else 0
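To see why this default gives same padding only for odd kernels, here is a quick arithmetic sketch (for a stride-1 convolution the output size is in + 2*pad - ks + 1; plugging in the default pad = (ks - 1) // 2):

# Stride-1 conv output size: out = in + 2*pad - ks + 1.
# With conv_layer()'s default pad = (ks - 1) // 2:
for ks in (3, 4):
    pad = (ks - 1) // 2
    out = 5 + 2 * pad - ks + 1  # input width/height = 5
    print(f"ks={ks}: pad={pad}, out={out}")
# ks=3: pad=1, out=5  -> size preserved
# ks=4: pad=1, out=4  -> size shrinks by 1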
You can quickly verify this with the following experiment:
In [20]: from fastai.layers import conv_layer
In [21]: import torch
In [22]: t = torch.randn(1, 1, 5, 5)
In [23]: conv_3x3 = conv_layer(ni=1, nf=3, ks=3)
In [24]: conv_3x3(t).shape
Out[24]: torch.Size([1, 3, 5, 5])
In [25]: conv_4x4 = conv_layer(ni=1, nf=3, ks=4)
In [26]: conv_4x4(t).shape
Out[26]: torch.Size([1, 3, 4, 4])
As shown above, the conv layer with kernel size 4 fails to preserve the input size (i.e. width and height): with the default padding (4 - 1) // 2 = 1, the output is 5 + 2*1 - 4 + 1 = 4.
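For readers not using fastai, here is a sketch of the equivalent shape behavior in plain PyTorch (note that conv_layer() also bundles extras like normalization and an activation by default, so this only mirrors the padding/shape logic, not the full layer):

import torch
import torch.nn as nn

t = torch.randn(1, 1, 5, 5)
# Same default: padding = (ks - 1) // 2 = 1 for ks = 3
conv = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=3, padding=1)
print(conv(t).shape)  # torch.Size([1, 3, 5, 5])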
2. If kernel_size is even, you can still get same padding by augmenting conv_layer() with nn.ZeroPad2d().
In this case the padding cannot be spread evenly over the 4 sides (top, bottom, left, right), so you need to add the extra pixel of padding yourself with nn.ZeroPad2d():
In [27]: import torch.nn as nn
In [28]: extra_pad = nn.ZeroPad2d((0, 1, 0, 1)) # (left, right, top, bottom)
In [29]: conv_4x4(extra_pad(t)).shape
Out[29]: torch.Size([1, 3, 5, 5])
One more note: nn.ZeroPad2d() must come first, then conv_layer(), because the extra padding has to be applied to the input before the convolution.
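If you want this as a reusable layer, a minimal sketch is to wrap the two in nn.Sequential so the ordering is fixed:

# Asymmetric extra pad is applied first, then the conv with its default padding.
same_conv_4x4 = nn.Sequential(
    nn.ZeroPad2d((0, 1, 0, 1)),    # one extra pixel on the right and bottom
    conv_layer(ni=1, nf=3, ks=4),  # still adds its default pad = (4 - 1) // 2 = 1
)
print(same_conv_4x4(t).shape)  # torch.Size([1, 3, 5, 5])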