p2.xlarge spot instances are great, but the bills stack up so quickly.
So I’m currently thinking of going with this: https://pcpartpicker.com/list/73nfbj, but I’m hoping I can drop the total down to $1.5k if possible.
This $800 set-up is pretty tempting; it’s not a 1080 Ti set-up, but it’s still faster than a p2.xlarge.
Does anyone have a link to a blog post, or a pcpartpicker list of the cheapest 1080 Ti set-up?
@iNLyze You probably will not see this, but I am having issues with Theano right now, and if you could explain how to use the version of VGG16 that is built into Keras, that would be very helpful. Thanks in advance.
@jrmo, using the Keras built-in version of VGG16 is possible, but you’ll be missing out on some of the improvements Jeremy made for the in-course version. Mainly, Jeremy added BatchNorm, which hadn’t yet been introduced at the time of VGG16’s original publication.
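To get you started, here’s a minimal sketch of using the built-in version, assuming Keras 2 with the TensorFlow backend. The BatchNorm dense head is my own approximation of the in-course modification, not Jeremy’s exact code, and the layer sizes / class count are placeholders:

```python
# Load Keras's bundled VGG16 (ImageNet weights) and bolt on a new
# dense head with BatchNorm, roughly in the spirit of the course version.
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Dropout, BatchNormalization, Flatten
from keras.models import Model

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the conv blocks; train only the new head

x = Flatten()(base.output)
x = Dense(4096, activation='relu')(x)
x = BatchNormalization()(x)          # BatchNorm postdates the original VGG paper
x = Dropout(0.5)(x)
preds = Dense(10, activation='softmax')(x)  # 10 = your number of classes

model = Model(inputs=base.input, outputs=preds)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Since these bundled weights ship in TensorFlow ordering, this route also sidesteps the Theano issues entirely.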
You might also try to convert the weights of the Theano version to TF using a tool published by titu1994. I posted the link in another thread on this forum.
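For what it’s worth, Keras itself also ships a helper for the kernel-flipping part of that conversion. A sketch, assuming `model` already holds the Theano-trained weights (the output filename is hypothetical):

```python
# Flip the conv kernels in place from Theano to TensorFlow ordering.
# Note: if the image dim ordering also differs, the first dense layer
# after Flatten may still need its weights reshuffled separately.
from keras.utils import convert_all_kernels_in_model

convert_all_kernels_in_model(model)
model.save_weights('vgg16_tf_weights.h5')  # hypothetical output filename
```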
Did anyone build a deep learning rig recently? My plan is to exclude the GPU for the moment and set up everything else.
I will buy GPU after a month or so when I have cash.
I am getting confused between the two CPU options available, the Intel i7 7700K and the AMD Ryzen 7 1800X.
Both have pros and cons of their own.
I am currently running everything on an old HP Z600 workstation. It has: two Intel Xeon E5606s, an EVGA GTX 960 2GB, 8GB of RAM, and a 128GB SSD, running Ubuntu GNOME 16.04 (I kill gdm3 whenever I run anything intensive). It runs everything OK, but is definitely on the lower end. I’m thinking about upgrading, but it’s a good place to start for around $600-$700.
I’ve been a lurker on this forum for a while (I finished Part 1, and parts of Part 2 - some of the generative models / RNN lectures were just way too confusing… maybe I need to watch them again like I did with Part 1. By the way, @jeremy - your CNN lectures made more sense than the 10+ books / articles / videos I’ve seen in the past year, thank you).
Now I’m on my journey to making my own DNN apps, and I’m stuck at the “making my own server” stage, particularly sourcing the hardware. I tried AWS and even some other dedicated GPU hosting services… but they’re awfully expensive.
I used to work with a friend who imported electronics (mainly mobos, RAM, HDs, etc.) directly from Chinese manufacturers, so I started wondering… if I could get dedicated GPU hosting built for you guys for sub-$100 to maybe $150 per machine per month, would that be of interest? I’m talking at least a GTX 1080 / 1070… quite substantial power.
This is just a thought in progress. I don’t own a datacenter or anything. The part I haven’t figured out is the cost of electricity / real estate, but if there were enough interest, I guess I could look into it.
And how much RAM, CPU, and HD would you guys be needing?
This brings up an interesting idea: if you’re building your own deep learning rig but you’re not going to be using it all the time, you could possibly rent out your spare cycles to a fellow hobbyist for a lower fee than AWS etc. Make some money back that way.
One advantage over cloud solutions like AWS is that if you wanted to train on ImageNet, for example, and the person you’re renting computer time from already has the ImageNet data on their drive, you don’t have to download it yourself anymore.
There’s always the potential of abuse, so you’d probably want to restrict what this kind of user could do on your computer (and your home network etc).
I just completed a build centered around an ASUS X99-E 10G WS. It’s a nice X99 mobo with 7 PCIe x16 slots (40 CPU lanes), so it can fit 3 GPU cards no problem. No problem booting a 6850K, but I did ultimately upgrade the UEFI BIOS (easy). Probably overkill on the CPU, but the price on the 6850K is very attractive now vs. a year ago.

Volta PCIe cards may be coming eventually, but right now the 1080 Ti 11GB is probably the sweet spot vs. the Titan Xp. I decided to get one 1080 Ti and then either upgrade to one or two more of whatever the Volta equivalent of the Titan Xp is, depending on cores and how much memory it ships with (16GB, please?). I can sell the 1080 Ti and probably recoup half or better of the original purchase price, so it will cost me about $300 for the year or so I’ll have it. I felt that was a better option than a Titan Xp, especially since I’m paying for it myself.

Probably not worth waiting for the Coffee Lake CPUs, as DL is mostly GPU dependent and early mobos may be buggy - that’s why I chose X99 instead of X299, etc. Reliability & compatibility are helpful here, one less thing to worry about.
P.S. This thread is excellent. Thanks to all posters for helping me with my build - I consulted here often.
I created a build that’s kind of based on a few pcpartpicker lists I’ve seen around. I’m a total noob at pc building, though, so I definitely could’ve made a mistake. Can someone look over it, and tell me if there are any issues I should fix? Thanks!
Ok, thanks. I switched the RAM out for 2x8GB of Patriot Viper Elite DDR4-2133. The CPU cooler didn’t mention coming with thermal paste, so I’ll probably just buy this.
I’ve been training some custom models using TensorFlow on my laptop; even using the built-in NVIDIA 960M, it takes 10 to 24 hours to train on a moderate-sized dataset. To speed things up I decided to build my own DL rig; here’s the pcpartpicker:
Unlike Slav, I plan on using one GPU (a GTX 1080 Ti) until I can justify the cost of buying and building a multi-GPU system; even then, if I need more than one GPU I think I’ll sell my old rig and buy a new one. Is it a good idea to use the i5-7500, which only has 16 PCIe lanes? Would this be a bottleneck for the 1080 Ti and the NVMe M.2 SSD (assuming they share bandwidth)?
Sorry if this was already answered in this thread. I read this thread for 3 hours but I couldn’t get 2/3 of the way through.
I don’t think PCIe bandwidth is worth worrying about if you’re using only 1 or 2 GPUs. You won’t be using the full bandwidth while training, since the GPU cannot do its convolution computations that fast anyway.
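As a rough sanity check, here’s the back-of-the-envelope arithmetic (my own assumed numbers: a batch of 256 float32 images at 224x224x3, ImageNet-style):

```python
# Time to push one training batch over PCIe, under the assumed batch size.
batch_bytes = 256 * 224 * 224 * 3 * 4     # ~154 MB of float32 image data

pcie_x16 = 16e9   # ~16 GB/s theoretical, PCIe 3.0 x16
pcie_x8 = 8e9     # ~8 GB/s, PCIe 3.0 x8

print(batch_bytes / pcie_x16 * 1000)  # ~9.6 ms per batch at x16
print(batch_bytes / pcie_x8 * 1000)   # ~19.3 ms per batch at x8
```

A forward+backward pass on a big convnet takes far longer than ~19 ms per batch on a single 1080 Ti, and the copy can overlap with compute anyway, so x8 vs. x16 rarely shows up in wall-clock time with 1 or 2 GPUs.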
Slav did mention (with some citations) that people were experiencing bandwidth bottlenecks; here is an excerpt from Slav’s blog:
Edit: As a few people have pointed out: “probably the biggest gotcha that is unique to DL/multi-GPU is to pay attention to the PCIe lanes supported by the CPU/motherboard” (by Andrej Karpathy). We want each GPU to have 16 PCIe lanes so it eats data as fast as possible (16 GB/s for PCIe 3.0). This means that for two cards we need 32 PCIe lanes. However, the CPU I have picked has only 16 lanes. So 2 GPUs would run in 2x8 mode (instead of 2x16). This might be a bottleneck, leading to less than ideal utilization of the graphics cards. Thus a CPU with 40 lanes is recommended. A good solution would be an Intel Xeon processor like the E5-1620 v4 ($300). Or if you want to splurge, go for a higher-end processor like the desktop i7-6850K ($590).
Following that logic, if you were to run 3 or 4 GPUs, you would not see linear speed-ups on a single machine, since you would be limited by PCIe bandwidth. Even with 40 lanes (Intel Xeon: https://ark.intel.com/products/92991/Intel-Xeon-Processor-E5-1620-v4-10M-Cache-3_50-GHz), 4 GPUs would run at 1x16 + 3x8. So ideally, 40 lanes is good for a 2-GPU system.
For the 16-PCIe-lane CPU that I’m considering buying, since I’m only going to run one 1080 Ti on it, my only concern is whether the 16 lanes are shared with the NVMe drive and whether that would be a bottleneck. I guess I’ve got to do some more research.
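One empirical check once the parts arrive: ask the driver what link width the GPU actually negotiated. A minimal sketch, assuming a Linux box with the NVIDIA driver installed (these are nvidia-smi’s documented query fields):

```python
# Query the GPU's current PCIe link generation and width via nvidia-smi.
# Run it during training: the link can train down to x8 if lanes are shared.
import subprocess

out = subprocess.check_output([
    "nvidia-smi",
    "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current",
    "--format=csv",
])
print(out.decode())
# e.g. "GeForce GTX 1080 Ti, 3, 16" would mean PCIe 3.0 x16 -- no sharing.
```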