I want to train super resolution on high-res images, at least 2048x2048. I have tried using 4 GPUs with 96 GB of memory in total, but the largest I could train at was 1700x1700, and even that only with batch_size = 1.
I am using the superres code from course v3.
My preference is to train on 2048x2048 because I have already tried splitting the images into smaller patches (512x512), but when I recombine them, the patch boundaries are visible.
Any ideas on how to train on these big images?
Novice here, but possibly… try using a “crop” transform with a high crop level.
This way you are still training at full resolution, just split into smaller sections, with multiple passes giving full coverage over time. The built-in transforms use random crops, or you can write your own:
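A minimal sketch of such a crop in plain PyTorch (this is illustrative, not the course notebook's code — `paired_random_crop` and the sizes are my own names). For super-res the key detail is cropping the low-res input and high-res target at matching locations:

```python
import torch

def paired_random_crop(lr, hr, lr_crop=128, scale=4):
    """Crop matching patches from a low-res/high-res image pair.

    lr: (C, h, w) tensor; hr: (C, h*scale, w*scale) tensor.
    The HR crop window is the LR window multiplied by `scale`,
    so the input patch and target patch stay aligned.
    """
    _, h, w = lr.shape
    top = torch.randint(0, h - lr_crop + 1, (1,)).item()
    left = torch.randint(0, w - lr_crop + 1, (1,)).item()
    lr_patch = lr[:, top:top + lr_crop, left:left + lr_crop]
    hr_patch = hr[:, top * scale:(top + lr_crop) * scale,
                     left * scale:(left + lr_crop) * scale]
    return lr_patch, hr_patch
```

Because the crops are random, over many epochs the model still sees every region of the full-resolution images, while each batch stays small enough to fit in memory.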
At inference time you may need Test Time Augmentation (TTA):
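fastai's `Learner.TTA` averages predictions over augmentations; for the visible-seam problem specifically, a related and common trick is overlapping tiles blended with a weight window, so seams average out instead of showing as hard edges. A sketch (my own code, assuming the model preserves spatial size — with an upscaling model you would scale the output coordinates accordingly):

```python
import torch

def tiled_inference(model, img, tile=512, overlap=64):
    """Run `model` over overlapping tiles of `img` (shape (N, C, H, W))
    and blend the results with a linear ramp at tile edges."""
    _, _, h, w = img.shape
    out = torch.zeros_like(img)
    weight = torch.zeros(1, 1, h, w)
    step = tile - overlap
    # Build a 2-D blending window: 1 in the middle, ramping down at edges
    # (linspace endpoints excluded so no pixel gets exactly zero weight).
    ramp = torch.ones(tile)
    edge = torch.linspace(0, 1, overlap + 2)[1:-1]
    ramp[:overlap] = edge
    ramp[-overlap:] = edge.flip(0)
    win = ramp[None, :] * ramp[:, None]
    for top in range(0, max(h - overlap, 1), step):
        for left in range(0, max(w - overlap, 1), step):
            t, l = min(top, h - tile), min(left, w - tile)
            pred = model(img[:, :, t:t + tile, l:l + tile])
            out[:, :, t:t + tile, l:l + tile] += pred * win
            weight[:, :, t:t + tile, l:l + tile] += win
    return out / weight.clamp_min(1e-8)
```

Each output pixel is a weighted average of every tile that covers it, so there is no single hard boundary for the eye to pick up.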
However, I get a strong feeling from the lecture series that you should question using such large images. One important consequence is that each epoch takes much longer to train, which slows down your improvement/iteration loop. Perhaps consider “progressive resizing”: do most of the training at lower resolution and use max resolution only for the last couple of epochs.
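As an illustration of what a progressive-resizing schedule might look like (a sketch, not the course's code — the schedule, `make_batch`, and the 4x scale are assumptions to be adapted to your setup):

```python
import torch
import torch.nn.functional as F

# Most epochs at small sizes, only a few at full resolution.
schedule = [(512, 20), (1024, 10), (2048, 2)]  # (HR size, epochs)

def make_batch(hr_images, size, scale=4):
    """Resize stored high-res images to the current stage size,
    then derive the low-res model input by a further `scale` reduction."""
    hr = F.interpolate(hr_images, size=(size, size),
                       mode='bilinear', align_corners=False)
    lr = F.interpolate(hr, scale_factor=1 / scale,
                       mode='bilinear', align_corners=False)
    return lr, hr
```

The training loop then iterates over `schedule`, rebuilding batches at each stage size; the early small-image stages give you fast iteration, and the final 2048 stage adapts the model to full resolution.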
btw, Have you actually tried smaller resolutions, that in practice were not sufficiently good?
i.e. for several smaller resolutions can you report what error rate you experience?