Making your own server

I am working on a project that aims to solve home inventory valuation (that is, figuring out what your stuff is worth, which helps with filing homeowners insurance claims, estate planning, etc.), in part with a neural net.

Taking the initiative, a colleague and I are going to build our own servers, since AWS has been a real stumbling block. My colleague is taking a different class, and combined we are going to have the following stack:

Ubuntu
Anaconda
TensorFlow
Theano
Keras
and ROS ( https://en.wikipedia.org/wiki/Robot_Operating_System ).

We are using an NVIDIA GTX 1080 from ASUS.

Has anyone tried this, and/or do you have suggestions or feedback? Any resources?

We are going to follow this website ( http://deeplearningathome.com/2016/09/Building-PC-For-Deep-Learning-Hardware.html ), and I will put any scripts and tips from along the way into the wiki.

-Arthur

11 Likes

Pretty cool! cc @lin.crampton, who I think has done something similar at a desktop or server level?

Is your coworker in the Udacity self-driving car nano degree program? I’m currently enrolled in that, too :slight_smile:

Just curious, what are the major pain points with AWS?

AWS is very over-priced for GPUs. You can buy a GTX 1070 for around $300 that gives better performance than the AWS P2’s GPU. So I think it’s a good idea to build your own deep learning machine if you can.
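For a rough break-even sketch (my assumptions, not official pricing: a p2.xlarge was around $0.90/hour on-demand at the time), the card pays for itself after a few hundred hours of training:

# back-of-the-envelope break-even; both numbers are assumptions
card_cost = 300.0   # GTX 1070, USD
aws_rate = 0.90     # p2.xlarge on-demand USD/hour (check current pricing)
print("break-even after %.0f hours" % (card_cost / aws_rate))  # ~333 hours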

9 Likes

It is definitely possible to put together a home system – mine is Ubuntu-based. I log into it remotely, interacting with Jupyter from another machine to minimize resource usage on the GPU server.

NVIDIA has a GPU grant program for academics – not students, but PIs ( https://developer.nvidia.com/academic_gpu_seeding ). I’m not eligible, but I’m working on inspiring a PI to write a grant and let me put together a machine for them. Will keep you posted.

2 Likes

The pain with AWS was getting registered (I’m still not registered after a long support chain), followed by the constant vigilance of turning the machine off.

It only makes moderate economic sense. At some level this is a gym membership, where I have to promise myself that I’ll use the server enough to justify it - basically a few months’ worth of AWS costs. There isn’t that much difference between weight training and training weights from a cash perspective.

The coworker is in the Udacity program and is pretty excited about it.

-Arthur

5 Likes

@arthurconner if you ask @datainstitute for help on Slack they should be able to get your AWS set up, if you’re still interested in doing that.

@lin.crampton frankly I’m not sure that program is worth it at the moment, since they don’t give out Pascal-based cards; also, all they provide is the card, not the rest of the server. Pascal cards are such a big step in performance, and (if you get a 1070 or even a 1080) not a huge chunk of the cost of the server - so I’m not sure how much benefit it provides to get their grant…

I set up my own Ubuntu 16.04 machine with a GTX 1060 6GB (I wish I had gotten the 1070 with 8GB, since the 1060 ran out of memory on the first lesson).

After installing Ubuntu my setup was roughly the following:

Install CUDA 8.0 and cuDNN

https://developer.nvidia.com/cuda-downloads
https://developer.nvidia.com/cudnn (you will need to register)
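Before moving on, a quick sanity check that the driver and toolkit actually installed (a sketch; nvidia-smi ships with the driver, nvcc with the toolkit, and nvcc assumes /usr/local/cuda/bin is on your PATH):

# sanity check: confirm the NVIDIA driver and CUDA toolkit are visible
import subprocess
print(subprocess.check_output(["nvidia-smi"]).decode())         # driver sees the GPU
print(subprocess.check_output(["nvcc", "--version"]).decode())  # toolkit version, expect 8.0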

Anaconda and Python

sudo apt install unzip
wget https://repo.continuum.io/archive/Anaconda2-4.2.0-Linux-x86_64.sh
bash Anaconda2-4.2.0-Linux-x86_64.sh
conda create -n fastai34 python=3.4   # isolated env for the course
source activate fastai34

conda install matplotlib
conda install cloudpickle
conda install opencv
conda install pandas
conda install bcolz
conda install scikit-learn
conda install theano
conda install keras
conda install jupyter

# switch Keras to use Theano
# edit ~/.keras/keras.json ->

{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
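A quick way to confirm Keras picked up the new settings (a minimal check; both calls exist in the Keras 1.x backend module):

# verify Keras now uses Theano with "th" dim ordering
from keras import backend as K
print(K.backend())             # expect "theano"
print(K.image_dim_ordering())  # expect "th"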

# create ~/.theanorc ->

[global]
floatX = float32
device = gpu0

[nvcc]
fastmath = True

[cuda]
root=/usr/local/cuda/
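With the config in place, a sanity check along the lines of the Theano docs (run inside the fastai34 env) is a small computation that should land on the GPU:

# confirm Theano computes on the GPU; it prints "Using gpu device 0: ..."
# on import when ~/.theanorc is picked up correctly
import numpy as np
import theano
import theano.tensor as T

x = theano.shared(np.random.rand(1000, 1000).astype(theano.config.floatX))
f = theano.function([], T.dot(x, x))
f()
print(theano.config.device)  # expect "gpu0"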

mkdir lesson1
cd lesson1/
wget http://www.platform.ai/files/nbs/lesson1.ipynb
wget http://www.platform.ai/files/nbs/utils.zip
wget http://www.platform.ai/files/nbs/vgg16.zip
wget http://www.platform.ai/files/dogscats.zip
unzip -q vgg16.zip
unzip -q utils.zip
unzip dogscats.zip

modify the lesson1 notebook: add "from imp import reload" above "import utils; reload(utils)"
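That is, the top of the notebook's import cell ends up as (Python 3 moved reload() out of the builtins):

# Python 3 fix: reload() is no longer a builtin
from imp import reload
import utils; reload(utils)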

# on login, switch to your fastai34 env
source activate fastai34

# run the Jupyter notebook remotely: ssh in with port 8888 tunnelled,
# then start the server on the remote machine
ssh -L 8888:127.0.0.1:8888 <machine address>
jupyter notebook --no-browser &
# then open http://127.0.0.1:8888 in your local browser

20 Likes

Awesome. What sort of performance are you getting compared to the AWS basic GPU?

What’s the best way to benchmark the performance?

In the first lesson it took 318 seconds to fine-tune with batch_size=32.
It ran out of memory with batch_size=64.
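If you want a comparable number, timing the same lesson-1 fine-tune is a reasonable benchmark, since everyone here runs it. A sketch using the course's Vgg16 wrapper (assumes the lesson-1 files and dogscats data from the setup above):

# time the lesson-1 fine-tune as a simple cross-machine benchmark
import time
from vgg16 import Vgg16

vgg = Vgg16()
batches = vgg.get_batches("dogscats/train", batch_size=32)
val_batches = vgg.get_batches("dogscats/valid", batch_size=32)
vgg.finetune(batches)

start = time.time()
vgg.fit(batches, val_batches, nb_epoch=1)
print("seconds for one epoch: %.0f" % (time.time() - start))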

Does anyone have a recommendation on what kind of motherboard I should get if I want to be able to use multiple GPUs (2 for now)?

And is there a specific version of the 1070 I should try to get?

Thanks.

That sounds reasonable. You could compare runtimes / memory issues with our AWS GPUs since we know those are working for most use cases in this course.

@davecg
most builds I’ve seen use the Gigabyte GA-X99-Ultra, which would allow you to use up to 4 GPUs…
A couple of interesting builds:
http://pjreddie.com/darknet/hardware-guide/
https://www.facebook.com/notes/chris-lengerich/build-your-own-nvidia-devbox/10152999419281541/

2 Likes

MSI and EVGA are the most popular due to performance and warranty. I am a big fan of MSI but EVGA has an excellent warranty.

1 Like

I get 229s with my 1070 on lesson one’s first fit. On an AWS P2 I believe it is around 650s.

2 Likes

Setting up your own machine is really easy. Even a 1060 (sub-$200) will blow the doors off an AWS P2 instance, and there will be no monthly costs. The 1070 is the sweet spot, but it goes for around $400. I highly recommend using Linux, as Windows is always fighting against the grain when installing dependencies, and Linux just works faster and smoother. Especially with Jupyter notebooks, the difference is huge.

2 Likes

Do I need a powerful CPU to set up a server? Or is the CPU not doing much work when training with CUDA?

Consensus is that the CPU doesn’t need to be that powerful, as long as you can support enough processes per GPU (and depending on what kind of preprocessing you need to do).

I’ve seen the Intel 6700 in a bunch of builds, so that seems reasonable (and you could probably get by with less if necessary).
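To make the "enough processes per GPU" point concrete, here is a hedged Keras 1.x sketch; nb_worker and pickle_safe are fit_generator arguments in that version, and the tiny model and dogscats path are just placeholders:

# feed the GPU from several CPU worker processes (Keras 1.x)
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

# placeholder model; with "th" dim ordering images arrive as (3, 224, 224)
model = Sequential([Flatten(input_shape=(3, 224, 224)),
                    Dense(2, activation="softmax")])
model.compile(optimizer="sgd", loss="categorical_crossentropy")

batches = ImageDataGenerator().flow_from_directory(
    "dogscats/train", target_size=(224, 224), batch_size=32)

# nb_worker=4 runs the CPU-bound preprocessing in 4 processes
model.fit_generator(batches, samples_per_epoch=batches.nb_sample,
                    nb_epoch=1, nb_worker=4, pickle_safe=True)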

1 Like