Dynamic Unet: different image sizes

In lesson3-camvid we create a learner for each image size, which means we cannot use images of arbitrary sizes on the fly. What I am trying to do is to train a unet first with a 256 image size, then 384, and finally 768 (progressive resizing).
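For reference, this is the progressive-resizing pattern from lesson3-camvid as I understand it: rebuild the DataBunch and the learner at each size, carrying the weights over with save/load. This is only a sketch against the fastai v1 API; src (an already split and labelled SegmentationItemList), the batch sizes, the epoch counts, and acc_camvid are placeholders, not code from the lesson.

def get_data(size, bs):
    # rebuild the DataBunch at the requested image size
    return (src.transform(get_transforms(), size=size, tfm_y=True)
               .databunch(bs=bs)
               .normalize(imagenet_stats))

# stage 1: train at 256
learn = unet_learner(get_data(256, bs=8), models.resnet34, metrics=acc_camvid)
learn.fit_one_cycle(10, slice(1e-3))
learn.save('unet-256')

# stage 2: build a *new* learner around 384 data, then load the weights back
learn = unet_learner(get_data(384, bs=4), models.resnet34, metrics=acc_camvid)
learn.load('unet-256')  # conv weights are size-agnostic, so this works
learn.fit_one_cycle(10, slice(1e-4))
# ... and the same again for 768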

Below is a code snippet I tested: the network was initialized with data of size 256x256 and then asked to predict on an image of size 384x384. As can be seen, the output shape stays fixed at 256x256.

img = data.valid_ds[0][0]  # a 384x384 image from the validation set

print(img.shape)
# output: torch.Size([3, 384, 384])

print(learn.predict(img)[0].shape)  # learner was built on 256x256 data
# output: torch.Size([3, 256, 256])
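As far as I can tell, learn.predict applies the DataBunch's validation transforms to the item, so the input is resized to the 256px size the learner was built with, which explains the fixed output shape. If that is right, one possible workaround (again just a sketch, reusing the hypothetical get_data helper and the 'unet-256' checkpoint from above) is to rebuild the learner around a 384px DataBunch and load the trained weights before predicting:

learn384 = unet_learner(get_data(384, bs=4), models.resnet34, metrics=acc_camvid)
learn384.load('unet-256')  # same weights; only the data size changes
print(learn384.predict(img)[0].shape)
# expected: torch.Size([3, 384, 384])

Is there a cleaner way to predict on arbitrary sizes without rebuilding the learner each time?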