Denoising Autoencoder Using Variable Input Dimensions

Greetings all, I'm new around these parts, but I'm hoping to get some guidance on using a pre-trained convolutional autoencoder to denoise images before running OCR.

This Medium blog post does a wonderful job explaining how to train and score a model using Kaggle data from the Denoising Dirty Documents competition.

What I want to do is train a similar model in an environment with GPUs, export the trained weights and model architecture, and then score different data in a non-GPU environment. My main question: is it possible to score a fully convolutional AE on images of varying dimensions, or do I need to resize all of my input images to the same dimensions before scoring? A sketch of what I was hoping would work is below.
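Roughly, I was imagining rebuilding the same architecture with a flexible input shape and loading the exported weights into it. This is just a sketch in Keras, not the blog post's exact model; the layer sizes and the weights file name are placeholders for my actual setup:

```python
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

# Height/width of None let the network accept arbitrary image sizes at inference time
inp = Input(shape=(None, None, 1))
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inp)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
out = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

model = Model(inp, out)
# Conv/pool/upsample weights don't depend on the spatial dimensions,
# so weights trained at a fixed size should (I think) load into this flexible-shape copy
model.load_weights('denoiser_weights.h5')  # placeholder file name
```

Is something along these lines the right way to handle variable-sized inputs, or am I missing a constraint?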

I tried scoring the model from the blog post above on a much larger image and received this error:

ValueError: Error when checking input: expected input_1 to have shape (258, 540, 1) but got array with shape (2528, 3296, 1)
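For reference, this is roughly how I'm scoring in the non-GPU environment; the file names below are placeholders for my actual paths:

```python
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

# Load the model exported from the GPU environment (placeholder file name)
model = load_model('denoiser.h5')

# Read a single grayscale page, scale to [0, 1], and add batch/channel dimensions
img = np.array(Image.open('scan_page_01.png').convert('L'), dtype='float32') / 255.0
img = img.reshape(1, img.shape[0], img.shape[1], 1)

# This is the call that raises the ValueError above on the 2528x3296 image
denoised = model.predict(img)
```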

Obviously the best approach would be to take the actual data to be scored, augment those images with various transformations and random noise, train a model on the augmented images, and then score the originals. Unfortunately, I'm dealing with sensitive data that cannot be moved off the existing environment, which does not currently support GPU acceleration.

Any help is much appreciated!