I’m going through part 1 of Jeremy’s DL course and working on a couple of personal projects, so I’m spec’ing out a custom rig to train my models on. I want to know how important/helpful it is to have more RAM in your machine for DL. With how expensive memory is, I’m trying to decide between 32 GB and 64 GB of RAM.
Also, how are people storing/using their HDDs, SSDs, and NVMe drives when training models? What do you put on each one, and what are the benefits/drawbacks of each?
After doing a fair bit of research I came across a useful rule of thumb: have at least twice as much system RAM as the total RAM on the graphics card(s) you will be using, to ensure no major bottlenecks when swapping data in and out.
I’m thinking of getting an NVMe drive. Do you have an idea of the practical speedup of NVMe vs. SSD during data prep and training? On paper they are faster, but does this translate to more than a few percent speedup?
I edited my post to include data prep, which, to be totally clear, is what I meant: the whole build-a-model process. I’ll probably get an NVMe at some stage anyway; I was just curious whether the speedup over an SSD is noticeable in practice.
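If you want to answer this for your own drives rather than trust the spec sheet, a rough sequential-read timing is easy to do. Below is a minimal sketch in Python; the file size, chunk size, and paths are all illustrative assumptions, and the OS page cache will inflate the numbers on a re-read, so for a real NVMe-vs-SSD comparison you’d drop caches or use a file larger than RAM:

```python
import os
import tempfile
import time

def read_throughput_mb_s(path, chunk_mb=64):
    """Time a sequential read of `path` and return throughput in MB/s."""
    size = os.path.getsize(path)
    chunk = chunk_mb * 1024 * 1024
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    elapsed = time.perf_counter() - start
    return (size / (1024 * 1024)) / elapsed

# Write a scratch file on the drive you want to test, then read it back.
# (64 MB here just to keep the demo quick; use a much larger file for
# a meaningful benchmark.)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(64 * 1024 * 1024))
    scratch = f.name

print(f"{read_throughput_mb_s(scratch):.0f} MB/s")
os.remove(scratch)
```

Run it once with the scratch file on each drive and compare; for many DL workloads the data loader does lots of small random reads rather than one big sequential one, so the gap you see here is an upper bound on the real-world difference.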
You should definitely go with 64 GB of RAM, since you’re putting in two 1080 Tis; you don’t want RAM to be the bottleneck.
Also, NVMe drives are much faster than SSDs, which are in turn much faster than HDDs.
I keep an NVMe sized as: expected OS usage (for me under 50 GB for Ubuntu, including the Anaconda mess; Anaconda env files go up to 5-10 GB if you ever look into them) + the size of the datasets in use (for me the largest is 60 GB) + swap (general rule of thumb: at least 2x RAM).
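The sizing above is just addition; here it is spelled out with my numbers (all of which are assumptions about one particular build, not recommendations):

```python
# Rough NVMe sizing following the rule of thumb above.
# Every figure here is an illustrative assumption for one build.
os_and_envs_gb = 50        # Ubuntu install + Anaconda environments
largest_dataset_gb = 60    # biggest dataset kept on the fast drive
ram_gb = 64
swap_gb = 2 * ram_gb       # rule of thumb: swap >= 2x RAM

nvme_gb = os_and_envs_gb + largest_dataset_gb + swap_gb
print(nvme_gb)  # 238
```

So with 64 GB of RAM, something in the 250-500 GB NVMe range comfortably covers this; the swap term dominates, so the RAM decision feeds directly into the drive size.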
SSDs are slightly slower, but if you don’t mind spending the extra buck, NVMe drives are definitely worth it.