I am thinking of investing in a dedicated machine for deep learning to avoid recurring costs on Amazon. I would like to get the best possible system without spending a fortune. My budget is $2000. It seems like the following GPUs are the latest and recommended for that budget: the 980 Ti or a GTX 1080/1070/1060. So it looks like these are the options:
Hey @sravya8, $2k can get you a great GPU rig at home to prototype with.
*disclaimer: this is totally my opinion, based on trying out a few approaches : )
Note: the thing to keep in mind is this: having a GPU rig is good enough for prototypes, but for the long run and big experiments, an EC2 instance (or a bigger machine) is required regardless.
For example, making small tweaks and playing around with small batch sizes works well, but training a big network (especially a convnet) can be resource-consuming and requires at least a bigger machine, equivalent to the EC2 ones.
Laptops are a bit less recommended, due to heating/battery issues and the fast pace at which new GPUs come out. However, there are laptops with 1080s now, as you linked to in the GTX 10 series.
As for an old desktop, I would say a new motherboard + a good power supply unit (PSU) can allow you to upgrade further 2 years from now.
Reserved instances can be a bit pricey to commit to ahead of time for individuals : /
If you end up choosing a GPU, here is a build I made 2 weeks ago: a 1080 rig. This is a decent setup that can be upgraded as well. For the price, though, I highly recommend getting at least a 1080, since 8GB of memory on a GPU disappears very easily.
You can buy an old Dell workstation on eBay for about $300 that's the equivalent of a $1500 new computer. Then just pop in your own graphics card. I'd recommend the 1070 (8GB) over the 1080 since it's much better value, and isn't that different in terms of performance. You could even buy two!
Thanks, I would also like to build my own rig. I like the idea of buying a cheap desktop and putting in a really good graphics card. I have two questions I'd like answered. Maybe someone can help.
A) If I have an NVIDIA card that supports CUDA, do any of the other parts matter much? I was looking at some cheaper desktops with AMD processors on Craigslist. Would there be a problem with getting an AMD processor/motherboard if I can attach an NVIDIA card to it?
B) Should I pay much attention to the PSU? I feel like training ML models could be memory intensive. Maybe something with 500-600W will do the trick.
If anyone knows anything about this, I'd be really grateful.
Anaconda uses Intel's MKL to speed up numpy, so an Intel CPU might be a good idea. I'd also suggest making sure you can install plenty of RAM, since that can be very helpful for speeding up pre-processing. So check the motherboard details and see if you can install at least 32GB (even if you don't buy the RAM now, you'll want it later).
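If you want to check whether the numpy build you already have is MKL-linked, a quick sketch (the timing will obviously vary by machine):

```python
import time
import numpy as np

# Print the BLAS/LAPACK libraries numpy was built against; on an
# MKL-linked build (e.g. Anaconda on Intel) "mkl" shows up in the output.
np.show_config()

# Rough matrix-multiply timing: MKL builds are typically several times
# faster than reference BLAS for this kind of workload on Intel CPUs.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
t0 = time.time()
c = a @ b
print(f"2000x2000 matmul: {time.time() - t0:.3f}s")
```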
I'm guessing you meant "power intensive". Yes definitely an issue. 550W would be my recommendation.
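As a rough sanity check on PSU sizing, you can sum the peak draw of each component and leave headroom. The numbers below are hypothetical placeholders, so substitute the actual spec-sheet figures for your parts:

```python
# Hypothetical peak power draws in watts; check your actual parts' spec sheets.
components = {
    "GTX 1070": 150,
    "CPU": 65,
    "motherboard + RAM": 50,
    "drives + fans": 30,
}

peak_watts = sum(components.values())
# Common rule of thumb: keep sustained load around 60% of the PSU rating,
# both for efficiency and for headroom on power spikes.
recommended_psu = peak_watts / 0.6
print(f"Estimated peak draw: {peak_watts}W -> PSU around {recommended_psu:.0f}W")
```

For a single-GPU build like this, the estimate lands in the same 500-550W ballpark; a second GPU would push it well past that.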
I have a workstation (desktop) with an i5-6500 processor, 8GB RAM, and a GTX 1060 graphics card. Is it enough for doing this course, or do I need to get an AWS instance?
I have an 8GB box with an NVIDIA 980 Ti that I bought originally for bitcoin vanity address calculations.
Then I found this course - and I'm all into it.
I found that 8GB of RAM suffocates on preloading of images into RAM.
I went to buy extra RAM and found that stores no longer carry that type of RAM - so I couldn't upgrade.
So 8GB is kind of pushing it - however, it is enough to just crawl through the course.
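One way to ease that RAM pressure is to avoid preloading every image and instead work through them in batches. A minimal sketch of the idea with a hypothetical helper (tools like Keras's `ImageDataGenerator` do this properly):

```python
import os

def batch_paths(image_dir, batch_size):
    """Yield image file paths in batches, so each batch can be loaded,
    decoded, and discarded in turn instead of preloading the whole set."""
    paths = sorted(
        os.path.join(image_dir, name)
        for name in os.listdir(image_dir)
        if name.lower().endswith((".jpg", ".jpeg", ".png"))
    )
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size]
```

Only one batch of decoded images needs to live in RAM at a time, at the cost of re-reading from disk each epoch.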
I love @yad.faeq's post (thank you for that) about the custom rig he built - very helpful.
My plan is to build a custom rig like that with 32-64GB of RAM and 2 slots for GPUs - for eventual scaling of computations.
@vijinkp I have a GTX 970 (4GB) and I'm getting a lot of memory errors when I try to train several layers (even with really small batches) or when I run training for several epochs. So a GTX 1060 should have at least 6GB to give "good" performance; the 3GB version seems too limited to me (based only on my experience).
Thanks for the answer @jeremy. Yes, it might be a problem with my configuration or setup; I'll try to clean up all the used memory each time I run a cell.
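For anyone else hitting these out-of-memory errors: a generic way to probe what fits on your card is to halve the batch size until one training step succeeds. A hypothetical sketch (real frameworks raise their own OOM exception types, so you'd catch those instead of `MemoryError`):

```python
def find_max_batch_size(train_step, start=64, min_size=1):
    """Halve the batch size until one training step fits in GPU memory.
    train_step(batch_size) is assumed to raise MemoryError when it doesn't."""
    size = start
    while size >= min_size:
        try:
            train_step(size)
            return size
        except MemoryError:
            size //= 2
    raise MemoryError("even the minimum batch size does not fit")
```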
Love this thread. I just followed in yad's footsteps and put together my own machine. Incidentally, I didn't see his parts list prior to my build, but it looks like we arrived at the same conclusions. I wrote a quick blog post going over the process here:
@brendan thanks for the great writeup on building your own box. How do you find the training performance on this machine compared to a P2 instance on AWS?