Here are the components I chose.
It isn't CPU bound when training on the GPU, but it will peg one core at 100%, so single-threaded performance does factor in a little. More important is the bus: using PCI Express Gen 3 is a 10-15% improvement over the same card on Gen 2, and a faster CPU can be another 10-15%.
For example, I had a 4.3GHz overclocked Ivy Bridge (Intel 3770K) with an Nvidia 1070 and would get around 324s per epoch on the first fit in lesson 1. When I upgraded to a Kaby Lake 7700K with the same GPU but a faster CPU and Gen 3 PCI Express, my times dropped nearly 30% to 229s.
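A quick sanity check on those numbers (a sketch; the 324s and 229s figures are the epoch times quoted above):

```python
# Epoch times for lesson 1's first fit on the same Nvidia 1070 (figures quoted above).
ivy_bridge_s = 324  # 3770K @ 4.3GHz, older platform
kaby_lake_s = 229   # 7700K, PCIe Gen 3

# Relative improvement from the CPU/platform upgrade alone.
speedup = (ivy_bridge_s - kaby_lake_s) / ivy_bridge_s
print(f"{speedup:.1%} faster per epoch")
```

That works out to about 29%, i.e. the "nearly 30%" above.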
When I train on the GPU I can see one thread at 100% usage, so the CPU does factor in a significant amount with GPU training. But you will already be running quite fast; it's just the difference between fast and faster, and nowhere near as important as the GPU in general.
Good choices. I would recommend going with Kaby Lake though, and if you can, just spend the extra $100 on the 7700K. You can also get a Z270, which is the newer Intel chipset, for nearly the same money.
I'd also recommend swapping out the power supply for an EVGA Gold G2. It's about $10 less, but it is a much better power supply and has a $20 rebate (just got one myself).
Good call. I'll take a look.
What are your training times per epoch for cats and dogs on this configuration?
For lesson one, it is 229s to do one epoch on the first fit. On AWS P2 it is around 650s. I am using the following:
Kaby Lake 7700K
MSI Z270 M5
MSI Nvidia 1070 8GB Gamer X
32GB RAM
@jeremy I want to suggest that you send people a mail about this so we can have GPU access, since it's for learning purposes.
I'm running two different Linux servers but am not achieving the same speeds as Jeremy does in his examples.
One is a newer machine that came equipped with a 1060; the other is an ancient machine (it was running Vista) that I gave a used 1070 (like $20 from eBay). Both are running Ubuntu 16.04, but both run the notebooks slower than in the video.
Are you getting speeds comparable to the speeds Jeremy is getting in his videos?
I didn't pay attention to the times Jeremy got. I just know the times I get.
But the 1070 will be slower on a Gen 2 PCI Express port or with a slower CPU. It will still be a lot faster than an AWS P2 instance, but it won't run at full speed unless it's in an x16 PCI Express Gen 3 slot with a Kaby Lake CPU.
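For a rough sense of why the PCIe generation matters, here are the theoretical x16 bandwidths (a back-of-the-envelope sketch using the per-lane rates and encoding overheads from the PCIe 2.0 and 3.0 specs):

```python
lanes = 16

# PCIe Gen 2: 5 GT/s per lane with 8b/10b encoding (80% efficient), 8 bits/byte.
gen2_gbs = 5.0 * lanes * (8 / 10) / 8

# PCIe Gen 3: 8 GT/s per lane with 128b/130b encoding (~98.5% efficient).
gen3_gbs = 8.0 * lanes * (128 / 130) / 8

print(f"Gen 2 x16: {gen2_gbs:.2f} GB/s")   # 8.00 GB/s
print(f"Gen 3 x16: {gen3_gbs:.2f} GB/s")   # 15.75 GB/s
```

So Gen 3 roughly doubles the host-to-GPU bandwidth, which is where the extra 10-15% per epoch can come from.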
Looks like a decent rig. A few suggestions:
- Get a platinum+ power supply; I like Seasonic for performance vs price. It'll pay for itself eventually, so there's no reason not to get one. Note that power supplies deliver their most efficient power at a certain range of wattage; check out some spec sheets.
- I'm not familiar with that fan, but I'd get a Noctua if I was on air, almost period. They're silent, very effective, have a 6yr warranty, and are generally ugly and brown.
- I'd get the 7700K; you want the extra PCIe lanes and various other goodies.
- I'd get almost any other branded card besides Zotac. Asus is the best with the 1080 imo.
- I might also hold off to see what the prices are going to be for the 1080 Ti if I was buying now.
@dradientgescent: I can confirm this observation. I got about a factor of 2 (or a bit more) better performance with the 1070 compared to a P2.
I built my DL rig on top of the "11 Watt PC" proposed by heise.de (sorry, it is written in German). When configured correctly this computer is completely inaudible. I modified it by changing the RAM to 32GB, the power supply to the be-quiet 600W model, and the CPU to the i5-6500. And I integrated a GTX 1070 for DL. I run Ubuntu 16.04 and TensorFlow rc10. I went through a lot of painstaking work hand-compiling TF to get it to support my 1070, but thankfully this isn't necessary anymore. Follow this guide to install TF using pip: Install Tensorflow
Oh, and if you, like me, use TF, the vgg16 model from the lessons doesn't work, because the weights file expects Theano dim_ordering. So I am using the Keras built-in VGG16 application.
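To illustrate the mismatch: Theano dim_ordering puts channels first, (channels, rows, cols), while TensorFlow expects channels last, (rows, cols, channels). A minimal numpy sketch (no Keras required) of converting an image between the two layouts:

```python
import numpy as np

# A dummy 224x224 RGB image in Theano/channels-first layout.
img_th = np.zeros((3, 224, 224), dtype=np.float32)

# Move the channel axis to the end for TensorFlow/channels-last layout.
img_tf = np.transpose(img_th, (1, 2, 0))

print(img_th.shape, "->", img_tf.shape)  # (3, 224, 224) -> (224, 224, 3)
```

The pretrained weights have the analogous problem in their convolution kernels, which is why using the Keras built-in VGG16 (with weights shipped for your backend) is the easy fix.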
@lin.crampthon Do you have any tutorial about build a home system and log in it remotely? Thanks in advance!
Building the server isn't all that difficult. If you haven't built a PC before, check out reddit.com/r/buildapc; they are really helpful for new builders. There are also great YouTube videos on it, I'm sure.
For logging in remotely, you have two things: SSH and the notebook. Just look up a tutorial for setting up SSH, and then you just need to set up the notebook. I would recommend using a strong password on the notebook if you are going to expose it to the internet, but even better, I would recommend exposing only SSH and tunneling through SSH to reach the notebook. It's a bit more complicated, but far more secure, as I have no idea what type of vulnerabilities are in Jupyter Notebook and highly doubt it has a security-focused codebase.
For software, you want to use Anaconda; it will install optimized versions of most of the Python libraries you need. It's the best-supported free scientific Python platform available.
This is how I've set up a low-cost server. There's a thousand ways to do something like this, but this is what worked for me. Hope this helps ..
1) Hardware/Software setup
a) Purchased a used Nvidia 970 (eBay, Craigslist) with a working on-board fan.
b) Found a non-utilized tower PC with an open double PCIe slot. The machine needs LOTS of RAM, and the power supply needs to be able to handle the load from the Nvidia card (like 200W additional); I think a 650W power supply is recommended.
c) Installed Ubuntu 16.04 on the machine and allocated LOTS of swap space.
d) Software setup: there is an official-ish script that is supposed to set up a machine, provided in the class as install-gpu.sh, available in the course github at https://github.com/fastai/courses/blob/master/setup/install-gpu.sh.
I haven't used this script to set up a machine (because it wasn't available when I first set up an Nvidia server); I used the method described in:
e) Remote execution -- running a remote Jupyter notebook. As a guide, I used info from:
For example, my username is lin, and when I run Jupyter remotely on the Nvidia server named "fuerte" from my laptop named "portatil", I use the following commands, and a Jupyter notebook comes up in a browser on my laptop (portatil) that is using the compute resources from my remote Nvidia machine (fuerte):
portatil > ssh -l lin fuerte
fuerte > jupyter notebook --no-browser --port=8888
portatil > ssh -NL 8888:localhost:8888 lin@fuerte
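If you tunnel often, the port forward can live in ~/.ssh/config so a plain `ssh fuerte` sets it up automatically (a sketch using the hostname and username from my example; adjust to your own):

```
Host fuerte
    HostName fuerte
    User lin
    # Forward local port 8888 to Jupyter running on the server
    LocalForward 8888 localhost:8888
```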
Let me know if you want me to go into more detail on anything.
@lin.crampton I'm curious. Could you define "LOTS" of RAM and swap?
I'm hoping to use a laptop with 16GB RAM / 8 vCPU / (1) GTX 960M and/or a workstation with 64GB RAM / 32 vCPU / (2) GTX 1070.
I'm confident the workstation will fit the bill hardware wise, not so sure about the laptop.
Your two-card 1070 system should more than fit the bill; much better than what I have.
I don't think the RAM on your machine is as important as the RAM on the Nvidia card.
I mentioned lots of swap because on the first one I set up, I used the default swap suggested by the Ubuntu installer. I ended up reallocating swap space; it would have been easier to do it from the beginning.
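A sketch of that swap reallocation, assuming Ubuntu with util-linux; the 32G size is only an example, and the creation commands need root so they're shown commented out:

```shell
# Check what swap is active now (non-destructive).
swapon --show
free -h

# To add a 32G swap file later (needs root):
# sudo fallocate -l 32G /swapfile
# sudo chmod 600 /swapfile
# sudo mkswap /swapfile
# sudo swapon /swapfile
# Then add "/swapfile none swap sw 0 0" to /etc/fstab to keep it across reboots.
```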
I set up these Nvidia servers on the cheap: I bought a used Nvidia 970 card with a working fan (between $15 and $40) and stuck it in a machine not currently being used. I wanted to see if I could set up a working deep-learning system at very little cost.
I went with a tower PC because it was the easiest and cheapest solution at the time. The procedure to attach an Nvidia card to a laptop is more involved; when I looked into it, I was going to have to buy a docking station for the Nvidia card if I wanted to use it with my laptop, and have a cord/hardware hanging off my laptop.
I log in remotely because I want to make sure the machine is not trying to use the GPU for video processing for the display.
I would highly recommend using SSH to tunnel Jupyter Notebook. The notebook provides full access to the machine, and a simple password on a non-security-focused codebase is a recipe for disaster.
There are lots of tutorials on tunneling with SSH, but basically, it means you only have ssh opened to the Internet and you use that to securely tunnel to the machine to connect to other local services.
This makes it a little more difficult if you are using notebooks from an iPhone, but any desktop will easily be able to handle the tunneling and make it painless once set up.
Thanks for the links and suggestions guys.
I'm working my way through this right now, hoping to get it up and running by the end of the weekend.
I'll let you know how it goes. : )
If you are considering using multiple GPUs in the future, then it's important to think about this in your motherboard/CPU selection.
You should look at CPUs that support sufficient PCIe lanes running at full x16 for each GPU. The best support seems to be the LGA2011 socket chips, many of which support 40 PCIe lanes: https://en.wikipedia.org/wiki/LGA_2011. The Intel 5930 is a good choice here if you can find one with an X99 motherboard. Note the X99 mobos also tend to be slightly better supported in Linux than shiny new chipsets, which can take forever to find drivers for. This can be expensive but will allow you to expand your system in the future.
Having said that, you can do a lot with old kit.
Your GPU card is most important, followed by your memory/disk, then the CPU. So drop the money on the graphics card.
For comparison, I have set up a couple of Ubuntu servers:
One using an i7 950 (an 8-year-old CPU), 6GB of DDR3 RAM and an Nvidia 1080.
The other using a Kaby Lake i7 7700, 16GB of DDR4 RAM and an Nvidia 1080.
Both systems outperform an Amazon P2 instance by a significant margin; the slower one is roughly 1.5-2 times faster. The new CPU/RAM/motherboard combo is about 20% faster than the 8-year-old CPU, showing that it is not all down to the GPU.