How do I disable all forms of multiprocessing in fastai and PyTorch? Why does my custom transform block cause a CUDA multiprocessing error?

Set `num_workers=0` in that `DataLoaders` call, not `workers`. The other solution is to not set your device to `cuda` inside the transform; fastai should move data to the GPU automatically for you.
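A minimal sketch of the underlying mechanism: fastai's `DataLoaders` forwards `num_workers` down to PyTorch's `DataLoader`, and with `num_workers=0` all batches are built in the main process, so no subprocess ever touches CUDA state. The toy `TensorDataset` here is purely illustrative.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for your real data. In fastai you would
# pass the same keyword, e.g. (hypothetical call):
#   dls = dblock.dataloaders(path, bs=64, num_workers=0)
ds = TensorDataset(torch.arange(8).float().unsqueeze(1))

# num_workers=0 means batches are loaded in the main process:
# no worker subprocesses are spawned, so CUDA is never re-initialized
# in a forked child.
dl = DataLoader(ds, batch_size=4, num_workers=0)

batches = [b[0] for b in dl]
print(len(batches))  # 2 batches of 4 items each
```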