Using PyTorch Docker Image from Nvidia (No CUDA installation required)

There are two ways to do this that I know of:

A: Edit the Dockerfile that the image was built from (or an equivalent Dockerfile) and build a new image.
B: Run a container from the image, make your changes, and then commit the container to a new image (sketched just below).
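
In practice, method B boils down to three Docker commands. Here's a minimal sketch; the image tag (`24.01-py3`) and the pip packages are only placeholders, so swap in whatever tag you actually pulled and whatever you want to add:

```bash
# Start an interactive container from Nvidia's PyTorch image
# (--gpus all requires the NVIDIA Container Toolkit on the host).
docker run --gpus all -it --name pytorch-custom \
    nvcr.io/nvidia/pytorch:24.01-py3 bash

# Inside the container: make whatever changes you need, e.g.
#   pip install fastai jupyterlab
# then exit the shell.

# Back on the host: snapshot the modified container as a new image.
docker commit pytorch-custom pytorch-custom:latest

# From now on, run the new image just like the original one.
docker run --gpus all -it pytorch-custom:latest bash
```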

Method B is faster, but I prefer method A because it keeps what's going on in the image transparent and easier to understand. If you'd like to try method A, I suggest reading some of @hamelsmu's Docker tutorial and then checking out my Dockerfile for fast.ai and the accompanying README for reference as you write your own.
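
For a sense of scale, here is a bare-bones sketch of method A. This is not the fast.ai Dockerfile linked above, just the smallest version of the idea; the base-image tag and the packages are again placeholders you'd replace with your own:

```bash
# Write a minimal Dockerfile that starts from Nvidia's PyTorch image
# and layers your own changes on top of it.
cat > Dockerfile <<'EOF'
FROM nvcr.io/nvidia/pytorch:24.01-py3
RUN pip install --no-cache-dir fastai jupyterlab
WORKDIR /workspace
EOF

# Build and tag a new image from the Dockerfile in the current directory.
docker build -t pytorch-custom:latest .

# Run it the same way you would run the base image.
docker run --gpus all -it pytorch-custom:latest bash
```

The payoff of method A is that the Dockerfile itself documents every change, so rebuilding or tweaking the image later is just a matter of editing a few lines and running `docker build` again.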