Making your own server

Hi @radek, I followed your cuda8 script to install things on my local machine. Everything went well, but when I use the browser to access localhost:8888, there is no Anaconda kernel listed in Jupyter. Do you have any idea why?

Screenshot

Console

❯ which python
/home/finxxi/anaconda3/bin/python

❯ which jupyter
/home/finxxi/anaconda3/bin/jupyter

❯ which pip
/home/finxxi/anaconda3/bin/pip

❯ jupyter notebook
[W 23:52:45.652 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
[I 23:52:45.676 NotebookApp] JupyterLab alpha preview extension loaded from /home/finxxi/anaconda3/lib/python3.6/site-packages/jupyterlab
JupyterLab v0.27.0
Known labextensions:
[I 23:52:45.677 NotebookApp] Running the core application with no additional extensions or settings
[I 23:52:45.680 NotebookApp] Serving notebooks from local directory: /home/finxxi
[I 23:52:45.680 NotebookApp] 0 active kernels 
[I 23:52:45.680 NotebookApp] The Jupyter Notebook is running at: http://[all ip addresses on your system]:8888/
[I 23:52:45.680 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).

I think everything is okay :slight_smile: The Python 3 kernel should be the Python environment provided by Anaconda.
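If in doubt, one quick way to confirm that the running kernel really is the Anaconda Python is to check the interpreter path from inside a notebook cell (a minimal sketch; the exact path depends on your install):

```python
import sys

# The interpreter the current kernel is running on; for an Anaconda
# install this should point somewhere under ~/anaconda3.
print(sys.executable)
```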


Ahh, really? XD This was driving me crazy. Thanks!


@radek the install script for cuda8 doesn’t install cuDNN. Is that intentional?

Yes, it is :slight_smile: The PyTorch devs were kind enough to provide all the necessary binaries (including cuDNN) when we install using conda - I think this is really great :slight_smile: It gives a much greater sense of confidence that everything is set up properly on our box than compiling all of this by hand would.

It would be cool to create some sort of common speed benchmark to test our configurations. To account for different GPU memory sizes, we can vary batch_size. The latest Keras (2.0.9) makes multi-GPU training easy, so we can test things like 2x1070 vs. a 1080 Ti. The CPU/HDD can also be tested by using CPU-intensive augmentation strategies.
We can compare:

  • pure GPU speed (time spent on some common model/dataset per epoch)
  • GPU/CPU speed (time spent on heavily augmented data located in memory)
  • GPU/CPU/HDD speed (time spent on heavily augmented images located on HDD)
  • multi-GPU vs. a more powerful single GPU

We can pick some standard dataset like CIFAR100 that comes with Keras.
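As a sketch of what the "time per epoch" measurement could look like (the helper name `time_epoch` is just illustrative, and the Keras call in the comment is an untested assumption):

```python
import time

def time_epoch(train_fn, *args, **kwargs):
    """Run one training call (e.g. one epoch of fitting) and report
    wall-clock seconds alongside whatever the call returns."""
    start = time.perf_counter()
    result = train_fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return elapsed, result

# With Keras this might look like (illustrative only):
#   elapsed, _ = time_epoch(model.fit, x_train, y_train,
#                           batch_size=64, epochs=1)
#   print("seconds per epoch:", elapsed)
```

Keeping the model, dataset, and batch size fixed across machines would make the per-epoch times directly comparable.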

Great idea, I was thinking the same. And I believe the more novice portion of the ML community (such as myself) would benefit from such a benchmark, to know whether we’ve configured our libraries and drivers correctly…

Does anyone more knowledgeable have an idea of how we should start on this?

Tensorflow.org has a methodology specified for producing their own benchmarks, and they have provided the scripts they used. That could be a starting point.


Note: the VGG16 model put forward in the course added batch normalization, which wasn’t available in the original VGG16. Also, there are obviously better GPU options available now, such as the 1080 Ti or the new 1070 Ti cards. The next-generation Volta GPU, which has “tensor units” optimized for operations needed by cuDNN, is currently only available in pro-level Tesla cards. Apparently, NVIDIA likes to sell the Pascal GPUs for as long as possible, until the large Volta dies can be produced with satisfactory yield.

Just posted my completed build to PCPartPicker (Intel 8700K, 1080 Ti, Intel Optane 900P 480 GB XPoint SSD).

Small, Silent, Powerful Machine Learning System

Hi Guys,

I have written my first blog post summarizing the learnings from building my DL machine.
Please do check it out: Choosing Components for Personal Deep Learning Machine

Feedback and suggestions are welcome.

Regards,
Gokkul

Hello @gokkulnath, it is a great post, very detailed. I can see a lot of effort went into it.

Can you also share the part about the installation of software and libraries?

Regards,
Irshad

Thanks for reading. I just installed Windows 10 and Ubuntu with drivers, and then used the setup scripts provided in the fast.ai GitHub repo.
If you still want a more detailed version, check out these posts:
Build your own top-spec remote-access Machine Learning rig
Setting up a Deep learning machine in a lazy yet quick way

Question: /install-gpu-part1-v2-cuda8.sh does not install Theano, which is at least discussed and used in the first lesson.
Shouldn’t we install it? @radek

Yes, this is on purpose. The script is meant for part 1 v2, where we only use PyTorch - I added Keras with TensorFlow as an extra. Also, I am not sure how much longer Theano will be maintained (or what the status of the project is at the moment), nor whether the objections to using TF from over a year ago still hold - TF is being actively developed, a lot has likely changed since then, and more changes are coming, especially as dynamic computational graph functionality has been announced as planned work. It might be that the reasons we were unable to use TF easily for RNNs in part 1 v1 no longer apply.

Well, lots of ‘ifs’ here :slight_smile: Sorry I do not have a concrete answer, but most if not all of part 1 v1 can likely be completed on a TF backend, and I would recommend going that route. Otherwise, you could try modifying the script to use Theano, or comment out the part where TF is installed and install Theano manually after the install completes.

Got it - I needed your reasoning behind this, and it is well explained :slight_smile:


I calculated the electricity bill using this formula:

watts = 600
price_per_kwh = 0.12
cost_per_day = watts / 1000.0 * 24 * price_per_kwh
print(cost_per_day)
print("cost per month max", cost_per_day * 30)

Comes to around 51 max and 20 on average, ty.
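The same arithmetic as a small helper (the function name is just illustrative), which also makes it easy to plug in a utilization factor - the ~20 average figure corresponds to running the 600 W box at roughly 40% utilization rather than around the clock:

```python
def monthly_cost(watts, price_per_kwh=0.12, hours_per_day=24.0, days=30):
    """Electricity cost of a machine drawing `watts` for the given hours."""
    return watts / 1000.0 * hours_per_day * days * price_per_kwh

print(round(monthly_cost(600), 2))        # 600 W around the clock: 51.84
print(round(monthly_cost(600) * 0.4, 2))  # ~40% average utilization: 20.74
```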


Hi all! As I had promised earlier in this thread, I’ve put together a more detailed blog post on the mATX deep learning rig I spec’d, bought, assembled, configured, and benchmarked. You can find that blog post linked here.

I would really love any feedback you might have on the build and post, as I’m always trying to learn and improve. (and I’d also love to hear any suggestions for how I might continue to put this little guy to work after I run through my current backlog of personal projects!)

Lastly, I just wanted to express my sincere, immense gratitude for all of the posts that came before mine in this thread – you’ve been incredibly helpful in guiding my thinking and getting me to this point of contentment!

Cheers,
Matt


Hi, any thoughts on the following machine available at Costco?

It has a GTX 1070 with 8 GB of graphics memory,
32 GB of RAM,
and an Intel Core i7.

Will it be good enough to:

  1. Participate with a decent chance of ranking on Kaggle in competitions like https://www.kaggle.com/c/cdiscount-image-classification-challenge
  2. Stay future-proof for 2 years?

Or should I go for https://www.costco.com/iBUYPOWER-C-i20-Gaming-Desktop---Intel-Core-i7-8700K---8GB-NVIDIA-GTX-1080-Graphics.product.100383125.html


Hi fellows,

I just wrote a blog post about how GPUs help with deep learning. The post also includes parts of Jeremy’s lesson 3 lecture. Could you please go through it and review it?

Thanks :slight_smile:
