Learn.fit_one_cycle runs out of CPU memory while I train on the GPU

Following the suggestion in the thread "RuntimeError: DataLoader worker is killed by signal", I tried setting num_workers to 0 when creating my databunch. Unfortunately, this didn't resolve the issue: the process still consumed all available RAM during the training of the first epoch.
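For context, fastai's databunch forwards num_workers to PyTorch's DataLoader under the hood, so the setting I tried is equivalent to this plain-PyTorch sketch (the toy dataset here is purely illustrative, not my actual data):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for the real one (hypothetical data).
ds = TensorDataset(torch.arange(100).float().unsqueeze(1), torch.zeros(100))

# num_workers=0 loads batches in the main process, spawning no worker
# subprocesses -- the setting suggested in the linked thread for the
# "worker is killed by signal" error.
dl = DataLoader(ds, batch_size=10, num_workers=0)

print(sum(1 for _ in dl))  # 100 samples / batch_size 10 -> 10 batches
```

With num_workers=0 there are no worker processes left to be killed, which is why it fixes the linked error; it evidently does not stop the main process itself from accumulating memory in my case.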