Making your own server

Hi,

p2.xlarge spot instances are great but the bills just stack up so quickly.
So I’m currently thinking of going with this: https://pcpartpicker.com/list/73nfbj, but I’m hoping I can drop the total down to 1.5k if possible.
This $800 set-up is pretty tempting; it’s not a 1080 Ti set-up, but it’s still faster than a p2.xlarge.

Does anyone have a link to a blog post, or a pcpartpicker list of the cheapest 1080 Ti set-up?

Any help/input would be greatly appreciated.

PS. Apologies for reviving this old thread.

@iNLyze You probably will not see this, but I am having issues with Theano right now, and if you could explain how to use the version of VGG16 that is built into Keras, that would be very helpful. Thanks in advance.

@jrmo, using the Keras built-in version of VGG16 is possible, but you’ll be missing out on some of the improvements Jeremy made for the in-course version. Mainly, Jeremy added BatchNorm, which wasn’t yet in use when VGG16 was originally published.
You might also try to convert the weights of the Theano version to TF using a tool published by titu1994. I posted the link in another thread on this forum.
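If you just want the stock network, it’s only a few lines via keras.applications. A minimal sketch (the weights download automatically on first use; 'cat.jpg' below is just a placeholder image path):

    import numpy as np
    from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
    from keras.preprocessing import image

    # Stock VGG16 with ImageNet weights -- note there is no BatchNorm here,
    # unlike the modified in-course version.
    model = VGG16(weights='imagenet')

    # VGG16 expects 224x224 RGB input; 'cat.jpg' is a placeholder path.
    img = image.load_img('cat.jpg', target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    # Print the top-3 ImageNet predictions.
    print(decode_predictions(model.predict(x), top=3))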

I’ve recently built my own, which is similar to what you’ve got there.

Options to drop to 1.5k:

  • use an i5 instead of i7
  • use a Corsair H60 water cooler, or, even cheaper, an air cooler
  • use DDR4-2400 instead of 3200 memory
  • use Skylake parts instead of Kaby Lake; this basically means using 2016 hardware instead of 2017, but the difference in performance is negligible

Did anyone build a deep learning rig recently? My plan is to exclude the GPU for the moment and set up everything else.
I will buy a GPU after a month or so when I have the cash.
I am getting confused by the CPU options available: the Intel i7-7700K versus the AMD Ryzen 7 1800X.
Both have pros and cons of their own.

Intel i7-7700K: 4.2 GHz (OC 4.5 GHz), 4 cores / 8 threads, price nearly INR 30k

AMD Ryzen 7 1800X: 3.6 GHz (OC 3.9 GHz), 8 cores / 16 threads, price nearly INR 40k

Will having 8 cores instead of 4 help substantially in deep learning / machine learning tasks, considering the smaller market share and maturity of the Ryzen 7?

Could anyone please help me with this?
Cheers.

I am currently running everything on an old HP Z600 workstation. It has two Intel Xeon E5606s, an EVGA GTX 960 2GB, 8GB of RAM, and a 128GB SSD, and runs Ubuntu GNOME 16.04 (I kill gdm3 whenever I run anything intensive). It runs everything OK, but it’s definitely on the lower end. I’m thinking about upgrading, but it’s a good place to start for around $600-$700.

Hey guys.

I’ve been a lurker on this forum for a while. (I finished part 1 and parts of part 2; some of the generative models / RNN lectures were just way too confusing… maybe I need to watch them again like I did with part 1. By the way, @jeremy, your CNN lectures made more sense than 10+ books / articles / videos I saw in the past year. Thank you!)

Now I’m on my journey to start making my own DNN apps, and I’m stuck at the making-my-own-server stage, particularly sourcing the hardware. I tried AWS and even some other dedicated GPU hosting services… but they’re awfully expensive.

I used to work with a friend who imported electronics (mainly mobos, RAM, HDs, etc.) directly from Chinese manufacturers, so I started wondering: if I could get dedicated GPU hosting built for you guys for sub-$100 to maybe $150 per machine per month, would that be of interest to you? I’m talking at least a GTX 1080 / 1070… quite substantial power.

This is just a thought in progress. I don’t own a datacenter or anything. The part I haven’t figured out is the cost of electricity / real estate, but if there were enough interest, I guess I could look into it.

And how much RAM, CPU, and HD would you guys be needing?

This brings up an interesting idea: if you’re building your own deep learning rig but you’re not going to be using it all the time, you could possibly rent out your spare cycles to a fellow hobbyist for a lower fee than AWS etc. Make some money back that way.

Yeah, I thought about that. Almost like SETI@home… but for GPUs / DNNs.

The only issues I can think of are:

  1. disk space - would you be willing to let a third party use 100+ GB at a time?
  2. cost of electricity - I guess you’d have to somehow calculate that and include it in your pricing (rough sketch below)
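For the electricity part, a back-of-the-envelope estimate is easy. A rough sketch in Python (the wattage and the $/kWh rate are assumptions; substitute your own numbers):

    # Rough monthly electricity cost for a GPU rig running 24/7.
    # All numbers are assumptions -- substitute your own.
    watts = 350             # assumed full-load draw (1080 Ti + CPU + overhead)
    hours_per_month = 730   # 24/7 operation
    rate_per_kwh = 0.12     # assumed electricity rate in $/kWh

    kwh = watts / 1000 * hours_per_month
    cost = kwh * rate_per_kwh
    print(f"~{kwh:.0f} kWh/month -> ~${cost:.2f}/month")

At those assumed numbers it comes out to roughly $30/month, so electricity alone would eat a meaningful chunk of a $100-150/month price before real estate and bandwidth.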

There are millions of GPUs (that gamers bought) around the globe that could be leveraged for deep learning / data science.

Would love to collaborate with people on this if it sounds interesting.

One advantage over cloud solutions like AWS is that if you wanted to train on ImageNet, for example, and the person you’re renting computer time from already has the ImageNet data on their drive, you don’t have to download it yourself anymore.

There’s always the potential of abuse, so you’d probably want to restrict what this kind of user could do on your computer (and your home network etc).

The issues would be 1) privacy (perhaps private data) and 2) security (e.g. remote code execution via pickle).

If anything, data / code would have to be sandboxed from the host / other projects.
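To make the pickle risk concrete: unpickling calls __reduce__ during load and will invoke whatever callable it returns, so any untrusted payload can run arbitrary code on the host. A minimal sketch (the echo command stands in for anything an attacker might run):

    import os
    import pickle

    class Malicious:
        # pickle calls __reduce__ when deserializing and invokes the
        # callable it returns -- here, os.system.
        def __reduce__(self):
            return (os.system, ("echo arbitrary code ran on the host",))

    payload = pickle.dumps(Malicious())

    # On the hosting side, merely loading the payload runs the command:
    pickle.loads(payload)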

Hi @taewoo,
I am in the same boat as you and would like to discuss partnering in this venture if you are open.

@sjt DM me, or email: my username @gmail.com.

I just completed a build centered around an ASUS X99-E 10G WS, a nice X99 mobo with 8 PCIe slots (40 lanes), so it can fit 3 GPU cards no problem. It had no problem booting a 6850K either, though I did ultimately upgrade the UEFI BIOS (easy). The CPU is probably overkill, but the price on the 6850K is very attractive now versus a year ago.

Volta PCIe cards may be coming eventually, but right now the 1080 Ti 11GB is the probable sweet spot versus the Titan Xp. I decided to get one 1080 Ti now and later upgrade to one or two of whatever the Volta equivalent of the Titan Xp is, depending on cores and how much memory it ships with (16GB, please?). I can sell the 1080 Ti and probably recoup half or better of the original purchase price, so it will cost me about $300 for the year or so I’ll have it. I felt that was a better option than a Titan Xp, especially since I’m paying for it myself.

It’s probably not worth waiting for the Coffee Lake CPUs, as DL is mostly GPU-dependent and early mobos may be buggy; that’s why I chose X99 instead of X299, etc. Reliability and compatibility are helpful here: one less thing to worry about.

P.S. This thread is excellent. Thanks to all posters for helping me with my build - I consulted here often.

I created a build that’s kind of based on a few pcpartpicker lists I’ve seen around. I’m a total noob at pc building, though, so I definitely could’ve made a mistake. Can someone look over it, and tell me if there are any issues I should fix? Thanks!

https://pcpartpicker.com/user/supermdguy/saved/PNMpbv

Looks like a good build to me. Personally I’d get DDR4-2400 instead of 3000 and save a few bucks.

I’m not sure if the CPU cooler comes with thermal paste; if not, you should get some. The thermal paste goes between the CPU and the cooler.


Ok, thanks. I switched the RAM out for 2x8GB of Patriot Viper Elite DDR4-2133. The CPU cooler’s listing didn’t mention thermal paste, so I’ll probably just buy this.

Hi everyone,

I’ve been training some custom models using TensorFlow on my laptop; even with the built-in NVIDIA 960M, it takes 10 to 24 hours to train on a moderately sized dataset. To speed things up I decided to build my own DL rig. Here’s the pcpartpicker list:

https://pcpartpicker.com/list/N84Kjc

The parts were picked following Slav’s guide: https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415

Unlike Slav, I plan on using one GPU (a GTX 1080 Ti) until I can justify the cost of buying and building a multi-GPU system; even then, if I need more than one GPU I think I will sell my old rig and buy a new one. Is it a good idea to use the i5-7500, which has only 16 PCIe lanes? Would this be a bottleneck for the 1080 Ti and the NVMe M.2 SSD? (Assuming they share bandwidth.)

Sorry if this was already answered in this thread. I read it for 3 hours but couldn’t get 2/3 of the way through.

Thanks in advance.

Dat

I don’t think PCIe bandwidth is worth worrying about if you’re using only 1 or 2 GPUs. You won’t be using the full bandwidth while training, since the GPU cannot do its convolution computations that fast anyway.

Slav did mention (with some citations) that people were experiencing bandwidth bottlenecks; here is an excerpt from Slav’s blog:

Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want to have each GPU have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I have picked has only 16 lanes. So 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lanes is recommended.
A good solution would be an Intel Xeon processor like the E5-1620 v4 ($300). Or if you want to splurge, go for a higher-end processor like the desktop i7-6850K ($590).

Following that logic, if you were to run 3 or 4 GPUs, you would not see linear speed improvements on a single machine, since you would be limited by PCIe bandwidth. Even with 40 lanes (Intel Xeon: https://ark.intel.com/products/92991/Intel-Xeon-Processor-E5-1620-v4-10M-Cache-3_50-GHz), four cards would run at 1x16 + 3x8. So ideally, 40 lanes would be good for a 2-GPU system.
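The lane arithmetic is easy to sanity-check. A small sketch, assuming roughly 0.985 GB/s of bandwidth per PCIe 3.0 lane (the lane splits below are typical motherboard configurations, not universal):

    # Approximate bandwidth per PCIe 3.0 lane, in GB/s.
    GBPS_PER_LANE = 0.985

    # Typical lane splits: (description, lanes assigned to each GPU).
    configs = [
        ("16-lane CPU, 1 GPU",  [16]),
        ("16-lane CPU, 2 GPUs", [8, 8]),
        ("16-lane CPU, 3 GPUs", [8, 4, 4]),
        ("40-lane CPU, 4 GPUs", [16, 8, 8, 8]),
    ]

    for desc, lanes in configs:
        per_gpu = ", ".join(f"x{n} = {n * GBPS_PER_LANE:.1f} GB/s" for n in lanes)
        print(f"{desc}: {per_gpu}")

Even an x8 slot still moves roughly 8 GB/s, which is why one or two GPUs on a 16-lane CPU are usually fine, while 3-4 cards start to get starved.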

For the 16-PCIe-lane CPU that I’m considering buying, since I’m only going to run one 1080 Ti on it, my only concern is whether the 16 lanes are shared with the NVMe drive and whether that would be a bottleneck. I guess I’ve got to do some more research.

EDIT: I think I found the answer: http://www.tomshardware.com/answers/id-2943191/nvme-ssd-affect-gpu.html
This means my NVMe SSD won’t be a bottleneck for my single GPU on a 16-PCIe-lane CPU (the M.2 slot runs off the chipset rather than the CPU’s 16 lanes). However, if I plan on using multiple GPUs I would need a CPU with more than 16 PCIe lanes; otherwise the cards drop to 2x8 (for 2 GPUs) or 1x8 + 2x4 (for 3 cards). Reference: https://www.pugetsystems.com/labs/articles/Z270-H270-Q270-Q250-B250---What-is-the-Difference-876/