I’m creating this topic separately from the fastpages discussion topic to avoid jamming that thread, since the two can be seen as different use-cases that still share some common information.
I think it would be interesting to gather information about how to set up a Docker container with GPU support (even if it’s a bit hacky, as I understand from the GitHub discussion) for both training and inference (prediction) purposes. In a deployment context, GPU inference can also be needed because it is more efficient than CPU inference (think of batch inference with heavy compute).
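For context, here is a minimal sketch of what such a setup might look like, assuming the host already has NVIDIA drivers and the NVIDIA Container Toolkit installed (the image tag and package choices below are just examples on my part, not a tested recipe):

```dockerfile
# Sketch only — assumes the host has NVIDIA drivers and nvidia-container-toolkit set up.
# Base image with the CUDA runtime; pick a tag compatible with your host driver.
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04

RUN apt-get update \
    && apt-get install -y --no-install-recommends python3-pip \
    && rm -rf /var/lib/apt/lists/*

# fastai pulls in PyTorch; on Linux this gives you a CUDA-enabled build.
RUN pip3 install fastai

COPY . /app
WORKDIR /app

# Hypothetical entry point — replace with your own training/inference script.
CMD ["python3", "predict.py"]
```

Built with `docker build -t fastai-gpu .` and run with `docker run --gpus all fastai-gpu` (the `--gpus` flag requires Docker 19.03+ plus the NVIDIA Container Toolkit on the host). Inside the container, `torch.cuda.is_available()` is a quick check that the GPU is actually visible.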
@Narsil, in your post (quoted below) you pointed out that you managed to create a Docker container with GPU support. It would be great if you could write a blog post (with, say, fastpages) describing the steps in a user-friendly way, so that users with different backgrounds could easily reproduce the GPU-supported container. If a blog post format isn’t possible, maybe a standard post. I think many fastai users would benefit from that information, and it might trigger feedback from others trying to accomplish a similar task.