Fastai to ONNX to tensorflow conversion error. Where is 'replicate'/'edge' padding used in Resnet DynamicUnet?

I am trying to convert the resnet34 DynamicUnet to TensorFlow for inference optimisations.
I have successfully converted fastai's PyTorch model to ONNX and am now trying to use the onnx_tf library to convert the ONNX model to a TensorFlow model.
Upon trying this, I get the error:

ValueError: Cannot take the length of shape with unknown rank.

I posted this to the onnx_tf GitHub, where I received an answer saying it is because the model uses 'edge' padding, which is not supported in TensorFlow. I've seen this type of padding referred to as 'border' in fastai's augmentations, and elsewhere as 'replicate' padding.

However, looking through the source code, I can't figure out where this type of padding is used. As far as I can tell, the resnet conv2d layers use the default 'zeros' padding, but I could be wrong. Can someone tell me where else this padding is used in the model? I will try to adapt the model to use TensorFlow-supported padding. Or if anyone has successfully converted the fastai resnet unet to TensorFlow, please let me know how you did it. Thanks
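One way to check, rather than reading the source: walk the model's modules and list any replication/reflection padding layers. A minimal sketch - the function name and the toy model are mine, standing in for the real DynamicUnet:

```python
import torch.nn as nn

def find_padding_layers(model):
    """Return (name, module) pairs for any replication/reflection padding layers."""
    pad_types = (nn.ReplicationPad1d, nn.ReplicationPad2d, nn.ReplicationPad3d,
                 nn.ReflectionPad1d, nn.ReflectionPad2d)
    return [(name, m) for name, m in model.named_modules()
            if isinstance(m, pad_types)]

# Toy model standing in for the DynamicUnet (not rebuilt here):
toy = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),  # default 'zeros' padding, not the culprit
    nn.Sequential(nn.PixelShuffle(2), nn.ReplicationPad2d((1, 0, 1, 0))),
)
print(find_padding_layers(toy))  # lists the offending layer at name '1.1'
```

Running this on the actual learner's model should point straight at every layer that exports as ONNX 'edge' padding.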

I think I found it: it's the ReplicationPad2d in PixelShuffle_ICNR:

(shuf): PixelShuffle_ICNR(
  (0): ConvLayer(
    (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1))
    (1): ReLU()
  )
  (1): PixelShuffle(upscale_factor=2)
  (2): ReplicationPad2d((1, 0, 1, 0))
  (3): AvgPool2d(kernel_size=2, stride=1, padding=0)
)
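If the goal is TensorFlow-supported padding, one option is to swap each ReplicationPad2d for a ZeroPad2d with the same padding tuple before re-exporting to ONNX (zero padding exports as 'constant' mode). A hedged sketch - the helper name is mine, and zero padding changes border values slightly, so output quality should be re-checked:

```python
import torch.nn as nn

def replace_replication_pad(module):
    """Recursively replace ReplicationPad2d children with ZeroPad2d.

    ZeroPad2d exports as ONNX 'constant' padding; note the border
    pixels will differ slightly from 'edge' padding."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReplicationPad2d):
            setattr(module, name, nn.ZeroPad2d(child.padding))
        else:
            replace_replication_pad(child)
    return module

# Toy stand-in for the shuf block above:
shuf = nn.Sequential(
    nn.PixelShuffle(upscale_factor=2),
    nn.ReplicationPad2d((1, 0, 1, 0)),
    nn.AvgPool2d(kernel_size=2, stride=1, padding=0),
)
replace_replication_pad(shuf)
print(shuf[1])  # now a ZeroPad2d with padding (1, 0, 1, 0)
```

After the swap, re-run the ONNX export and the onnx_tf conversion; since the pad comes just before an AvgPool2d with stride 1, the numerical difference is confined to the border.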

Did you try using OpenVINO for inference optimization instead?

Hi, thanks for the suggestion. I'm not using Intel hardware - I'm planning to do inference on mobile/web with TensorFlow.js - so I can't use OpenVINO.

In that case, did you have a chance to look at the ONNX offerings for mobile and web? From what I understand, converting models/trained weights from PyTorch to TensorFlow (and vice versa) is quite cumbersome. I recall a post by the Hugging Face team describing the complexities involved.