I have built some pretty cool fastai models to do things like degrain or upres an image, all trained on 224x224 crops. How do I use these models on higher-resolution images with non-square inputs? I quickly run out of GPU memory when I try. Do I move everything over to the CPU so it can use system memory? Or do I have to cut each image into square chunks, run inference on each, and then stitch it all back together? I know there are tools like Gigapixel AI and letsenhance that seem to do this quite easily, and I would love to hear if anyone here has some ideas. Otherwise this seems like a huge limitation to using these models in production. Thanks!
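For reference, the chunk-and-stitch idea above can be sketched roughly like this. This is a minimal, hypothetical example (not fastai's API): it assumes a PyTorch model whose output has the same spatial size as its input (e.g. a degrainer), slides a 224x224 window with some overlap, and averages predictions where tiles overlap to soften seams. Tile size, overlap, and the seam-blending strategy are all assumptions you would tune.

```python
import torch

def infer_tiled(model, img, tile=224, overlap=32):
    """Run `model` over `img` (C, H, W) in overlapping square tiles,
    averaging predictions in overlap regions to hide tile seams.
    Assumes H and W are each >= `tile` and that the model preserves
    spatial size (for an upscaler, output coordinates would need to
    be multiplied by the scale factor)."""
    c, h, w = img.shape
    out = torch.zeros_like(img)
    weight = torch.zeros(1, h, w)          # how many tiles covered each pixel
    stride = tile - overlap

    # Tile origins; force a final tile flush against each border
    # so the whole image is covered.
    ys = list(range(0, h - tile + 1, stride))
    xs = list(range(0, w - tile + 1, stride))
    if ys[-1] + tile < h:
        ys.append(h - tile)
    if xs[-1] + tile < w:
        xs.append(w - tile)

    with torch.no_grad():                  # only one tile on the GPU at a time
        for y in ys:
            for x in xs:
                patch = img[:, y:y + tile, x:x + tile].unsqueeze(0)
                pred = model(patch).squeeze(0)
                out[:, y:y + tile, x:x + tile] += pred
                weight[:, y:y + tile, x:x + tile] += 1

    return out / weight                    # average overlapping predictions
```

With this scheme, peak GPU memory is bounded by one 224x224 tile rather than the full image, which is presumably how the commercial tools handle 2K-and-larger inputs; the overlap-and-average step is one simple way to avoid visible seams at tile boundaries.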
Any thoughts on this? I am really curious how anyone has enough GPU memory to upres a 2K file to 4K. What am I missing?