
That’s a useful link. I don’t think running a model on such large images is a common requirement, so the fact that this edge case isn’t yet well supported by PyTorch for CPU inference is hardly a reason to say you can’t use it in production!

Agreed, and I’m sorry I wasn’t specific enough: PyTorch is not meant for production for all use cases. Now consider mine with SRPGAN. This stuff is simply impossible to run on CPU because of the aforementioned issue. The thing is, GPU memory is very limited: use an image with dimensions above 700x700 and you run out of CUDA memory on a Titan X. With a CPU you just add more RAM; with a GPU you buy a new card (if one even exists that meets your CUDA memory requirements). And I’m not even talking about the cost of GPUs vs CPUs. When you start scaling, that becomes a real, expensive issue.
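For what it’s worth, here’s a minimal sketch of the trade-off I mean: try the GPU first and fall back to CPU (bounded only by system RAM) when CUDA runs out of memory. The `infer` helper and the fallback strategy are just my own illustration, not anything the library provides:

```python
import torch

def infer(model, image_tensor):
    """Run inference on GPU if possible, falling back to CPU when
    the input is too large for GPU memory (hypothetical strategy)."""
    if torch.cuda.is_available():
        try:
            with torch.no_grad():
                return model.cuda()(image_tensor.cuda()).cpu()
        except RuntimeError as e:
            # A CUDA OOM surfaces as a RuntimeError mentioning "out of memory"
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()
    # CPU path: much slower, but limited only by how much RAM you add
    with torch.no_grad():
        return model.cpu()(image_tensor.cpu())
```

Of course this only papers over the problem; the CPU path is far too slow for something like SRPGAN, which is the whole point.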

Probably the best place to answer that question is the classic computer vision papers. There’s been a lot of work on this over the decades; e.g. there’s a whole literature on adding and removing haze in images.

Interesting, I’ll def take a look, thanks for the tip :slight_smile:.

To me, the excellent support you received from the core team on that thread, and the information provided about how this issue is being worked on, give me a lot of confidence that production use of the library is in fact a priority.

Agreed. The support was very good and I’m also confident about the future, but for now it’s… complicated :frowning:

I heard recently that the PyTorch core team is working on ONNX support, which is going to make things a lot more straightforward, as it provides an open-source translation between different languages and tools.

Plus of course the ongoing work on ONNX support.

ONNX won’t solve everything unfortunately :frowning: . This is also something I considered when I deployed my model. ONNX needs static models, so using it for dynamic neural networks such as RNNs, or for networks with variable input sizes (like in my project), won’t work. It basically works like this: you use a fixed-size input, wrap your model with the ONNX exporter, and run a batch through it. ONNX then traces the layers your input passed through and creates a representation that you can export to other frameworks.
The way it works today makes it by nature very limited :frowning:.
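To make that concrete, here is roughly what the fixed-size tracing step looks like; the toy model and the 224x224 dummy input are placeholders, just to show the idea:

```python
import torch
import torch.onnx

# Hypothetical model; any nn.Module is exported the same way.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
)
model.eval()

# The export works by tracing: it runs this fixed-size dummy batch
# through the model and records the operations it passes through.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx")
```

Because the exported graph is just a trace of that one run, anything that depends on the input shape or on control flow gets frozen into it, which is exactly why variable-size inputs and RNN-style dynamism are a problem.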
