Tips to reduce memory requirements for a fastai model during inference inside Docker

How are you launching the Docker image? Generally you can raise the container's shared memory size (`/dev/shm`) when launching it.

E.g. you can pass:

--shm-size=24G
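
As a sketch of where the flag goes (the image name here is a placeholder, and 24G is just an example size — pick a value that fits your host's RAM):

```shell
# Launch the container with 24 GB of shared memory mounted at /dev/shm.
# "my-fastai-image" is a placeholder for your actual image name.
docker run --shm-size=24G my-fastai-image

# Inside the container you can verify the shared memory size with:
#   df -h /dev/shm
```

If you use docker-compose instead, the equivalent setting is the `shm_size` key on the service.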