Dedicated machine

Hello all,

I am thinking of investing in a dedicated machine for deep learning to avoid recurring costs on Amazon. I would like to get the best possible system without spending a fortune; my budget is $2000. It seems the latest recommended GPUs for that budget are the 980 Ti or the GTX 1080/1070/1060. So these look like the options:

  1. For laptops, GTX 10 series notebooks seem like good options.
  2. Old desktop + new GPU
  3. A reserved instance on AWS is another alternative.

I would like to know your recommendations here.

Thanks,

1 Like

Hey @sravya8, $2k can get you a great GPU rig at home to prototype with.

*Disclaimer: this is totally my opinion, based on trying out a few approaches : )

Note: the thing to keep in mind is that a GPU rig is good enough for prototypes; for the long run and big experiments, an EC2 instance (or a bigger machine) is required regardless.

For example, making small tweaks and playing around with small batch sizes works well, but training a big network (convnets especially) can be resource-consuming and requires at least a machine equivalent to the EC2 ones.

  1. Laptops are a bit less recommended, due to heating/battery issues and the fast pace at which new GPUs come out. However, there are 1080s now, as you linked to in the GTX 10 series.

  2. For an old desktop, I would say a new motherboard + a good power supply unit (PSU) will allow you to upgrade further 2 years from now.

  3. Reserved instances can be a bit pricey up front for individuals : /

If you end up going with a GPU rig, here is a build I made 2 weeks ago: 1080 rig. This is a decent setup that can be upgraded as well, though for the price I highly recommend getting at least a 1080, since 8GB of memory on a GPU disappears very easily.

2 Likes

You can buy an old Dell workstation on eBay for about $300 that's the equivalent of a $1500 new computer. Then just pop in your own graphics card. I'd recommend the 1070 (8GB) over the 1080 since it's much better value, and isn't that different in terms of performance. You could even buy two!

6 Likes

Thanks, I would also like to build my own rig. I like the idea of buying a cheap desktop and putting in a really good graphics card. I have two questions I'd like answered. Maybe someone can help.

A) If I have an NVIDIA card that can support CUDA, will any of the other parts be important to choose? I was looking at some cheaper desktops with AMD processors on Craigslist. Would there be any problem with getting an AMD processor/motherboard if I can attach an NVIDIA card to it?
B) Should I pay much attention to the PSU? I feel like training ML models could be memory intensive. Maybe something with 500-600W will do the trick.

If anyone knows anything about this I'd be really grateful.

Anaconda uses Intel's MKL to speed up numpy, so an Intel CPU might be a good idea. I'd also suggest making sure you can install plenty of RAM, since that can be very helpful for speeding up pre-processing. So check the motherboard details and see if you can install at least 32GB (even if you don't buy the RAM now, you'll want it later).
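
If you want to double-check what your numpy build is linked against, a quick way (my addition, not something from this thread) is:

```python
# Print numpy's build configuration; an MKL-linked build mentions "mkl"
# in the BLAS/LAPACK sections of the output.
import numpy as np

np.__config__.show()
```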

I'm guessing you meant 'power intensive'. Yes, definitely an issue. 550W would be my recommendation.
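
As a rough sanity check on that number (ballpark figures of mine, not from this thread): a GTX 1080 is rated around 180W, a desktop CPU around 65-90W, and motherboard, RAM, drives and fans perhaps another 75W, so roughly 350W at full load. A 550W unit keeps you in the efficient 50-70% load range and leaves margin for power spikes.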

1 Like

That's so helpful, thanks!!

Thanks again for your inputs @yad.faeq and @jeremy!
+1 for upgradability

Hello all,

I have a workstation (desktop) with an i5-6500 processor, 8GB RAM and a GTX 1060 graphics card. Is it enough for doing this course or do I need to get an AWS instance?

I have an 8GB box with an NVIDIA 980 Ti that I bought originally for bitcoin vanity address calculations.
Then I found this course - and I'm all into it.
I found that 8GB of RAM suffocates when preloading images into RAM.

I went to buy extra RAM and found that stores no longer carry that type of RAM - so I couldn't upgrade.
So 8GB is kind of pushing it - however, it's enough to crawl through the course.
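
If RAM is tight, one workaround is to stream images from disk in batches instead of preloading everything (a sketch using Keras's ImageDataGenerator; the path and sizes here are hypothetical):

```python
# Sketch: stream images from disk in batches instead of preloading them
# all into RAM. The directory path and sizes below are hypothetical.
from keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator()
batches = gen.flow_from_directory(
    'data/train',            # hypothetical folder of class subdirectories
    target_size=(224, 224),
    class_mode='categorical',
    batch_size=32)           # only ~32 images held in memory at a time
```

Passing `batches` to `model.fit_generator` then keeps memory use roughly flat regardless of dataset size.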

I love yad.faeq's post (thank you for that) about the custom rig he built - very helpful.
My plan is to build a custom rig like that with 32-64GB of RAM and 2 slots for GPUs - for eventual scaling of computations.

1 Like

@vijinkp I have a GTX 970 (4GB) and I'm getting a lot of memory errors when I try to train several layers (even with really small batches) or when I run training for several epochs, so a GTX 1060 should have at least 6GB to get "good" performance; the 3GB version seems too limited to me (based only on my experience :slight_smile: ).

3GB is OK if you use small batch sizes.
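
To make that concrete, the batch size is just an argument to `fit`; here is a minimal, self-contained sketch (toy model and random data, purely illustrative):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy stand-in for a real network; the point is the batch_size argument.
model = Sequential([Dense(10, input_dim=100, activation='softmax')])
model.compile(optimizer='adam', loss='categorical_crossentropy')

X = np.random.rand(1000, 100)
y = np.random.rand(1000, 10)

# Halve batch_size (e.g. 64 -> 32 -> 16) until out-of-memory errors stop.
model.fit(X, y, batch_size=16, nb_epoch=1)  # 'epochs' in Keras 2
```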

Thanks for the answer @jeremy. Yes, it might be a problem with my configuration or setup; I'll try to clean up all the used memory each time I run a cell :slight_smile:

Thanks @devsp, @jeremy and @gesman for the inputs.

Love this thread. I just followed in yad's footsteps and put together my own machine. Incidentally, I didn't see his parts list prior to my build, but it looks like we arrived at the same conclusions. I wrote a quick blog post going over the process here:

3 Likes

@brendan thanks for the great writeup on building your own box. How do you find the training performance on this machine compared to a P2 instance on AWS?

I just ran the CatsDogsRedux notebook on a p2.xlarge and the first fit took 602 secs. My first fit took 245 secs. Roughly 2.5x faster.

1 Like

Thanks for all the help I got here. I finally got my machine set up last weekend and wrote up a small blog post which builds on Brendan's post and targets slightly lazy people :slight_smile: https://medium.com/@sravsatuluri/setting-up-a-deep-learning-machine-in-a-lazy-yet-quick-way-be2642318850#.k6txcrrfw

1 Like

Great blog posts @sravya8 and @brendan :grinning:!

Is it possible to add another GPU to this setup later, or would you recommend the Xeon series (with 40 PCIe lanes)?

Thanks!