Let's say I trained a good model with 256 x 256 inputs. Now I want the model to take a 4k-resolution image as input in production. I probably don't want to reduce the 4k image to 256 before feeding it into the model.
I assume I'll need to train a model with 4k-resolution input. Wouldn't I run out of memory even with a 1080 Ti, and wouldn't training take years? What kind of machine would I need to train this?
Love the article, but I'm still not fully clear on the answer to my question. It looks like I can add a custom head that translates 4k down to 256? I think doing this would still kill my GPU memory.
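Just to make my question concrete: the kind of "head" I'm imagining would, in effect, pool the 4k input down to 256 before the trained backbone sees it. A minimal numpy sketch of that pooling step (the shapes and function name here are my own illustration, not from the article):

```python
import numpy as np

def block_average_downsample(img, out_size=256):
    """Downsample a square image to out_size x out_size by averaging
    non-overlapping blocks -- roughly what an average-pooling head does."""
    h, w = img.shape[:2]
    assert h % out_size == 0 and w % out_size == 0, "sizes must divide evenly"
    bh, bw = h // out_size, w // out_size
    # Reshape into (out_size, bh, out_size, bw, channels) blocks,
    # then average each bh x bw block down to a single pixel.
    blocks = img.reshape(out_size, bh, out_size, bw, -1)
    return blocks.mean(axis=(1, 3)).squeeze()

# A 4096 x 4096 x 3 float32 image is only ~200 MB by itself;
# it's the training-time activations, not the input, that blow up memory.
img4k = np.random.rand(4096, 4096, 3).astype(np.float32)
small = block_average_downsample(img4k)  # shape (256, 256, 3)
```

If the pooling sits at the very front like this, the backbone's memory cost stays the same as at 256 x 256, which is part of what I'm trying to understand.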