Personal DL box

I have 2x16GB DDR4 3000MHz RAM. It’s one area I wish I had more, but it costs. I will soon be adding a second GPU and hope to avoid buying more memory, but that is more hope than expectation. For all the DL-box threads online, few of them emphasise the non-GPU side: how much RAM and disk (both size and speed) you actually need.

I asked because if you are able to load the entire dataset into RAM (it’s done automatically if you use a bcolz array, for example), disk speed doesn’t matter anymore.
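For anyone curious, here is a minimal Python sketch of that bcolz pattern (the path and array shape are made up for illustration): write the dataset to a compressed on-disk carray once, then on later runs slice the whole carray to pull it into RAM as a NumPy array, after which training never touches the disk.

    import numpy as np
    import bcolz

    # One-off: save a (hypothetical) feature array to a compressed on-disk carray.
    features = np.random.rand(10000, 4096).astype(np.float32)
    carr = bcolz.carray(features, rootdir='data/features.bc', mode='w')
    carr.flush()

    # Later runs: slicing the whole carray decompresses it into RAM as a plain
    # NumPy array, so reads during training no longer hit the disk.
    features_in_ram = bcolz.open('data/features.bc')[:]
    print(features_in_ram.shape)  # (10000, 4096)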

However, consider that for DL tasks RAM speed doesn’t matter: go for the cheapest.

Currently waiting to buy my DL box.
I don’t know which is better: one RTX 2080 (€850) or one GTX 1080 (~€500), given that for €150 more than one RTX 2080 I could buy two GTX 1080s (is working with two GPUs a real benefit when playing with deep learning?).
The RTX 2070 could be a good bet, but there is still no release date.

I second that approach (waiting :slight_smile: ) and that question. One difference will be in memory size, which may be important for large datasets, but that difference is not huge…

Possibly the prices of 1080s will drop when the 2080s are released…

Hi @radek,

I built my own DL machine, and to set up the course requirements I’m using your scripts. I see that you are using CUDA 9, whereas I’ve installed CUDA 10 as part of my installation. Is it required to have CUDA 9 instead of CUDA 10? I also see that cudatoolkit 9 is being installed as part of the conda installation.

cudatoolkit-9. 100% |##############################################################################################| Time: 0:03:46 1.63 MB/s

If I have to change it to work with CUDA 10, where should I make the change so that cudatoolkit-10 is installed? In fact, I already have cudatoolkit-10 installed. Do you see any problems with this setup?

You can install both CUDA 10 and 9.x on the same system. See the write-up here:

With that said, currently it is more straightforward to build PyTorch with CUDA 9.2 than with 10.0.
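Either way, a quick sanity check (just a throwaway snippet, not part of any setup script) to confirm which CUDA version your PyTorch build was actually compiled against and whether it can see the GPU:

    import torch

    print(torch.__version__)          # the PyTorch version you ended up with
    print(torch.version.cuda)         # the CUDA version PyTorch was built against
    print(torch.cuda.is_available())  # True if the driver and GPU are usable
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # the card PyTorch will use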


Thank you @redturtle

Here is my simple 5-step guide.

Hope somebody will find it useful.

Hi @redturtle @radek,

I wanted to follow the steps, so instead of running your script I ran most of the instructions manually, to see which dependencies were being resolved.

In the script I see that instead of doing “conda env update” you suggest installing PyTorch via a git clone. I followed the same procedure and everything works fine.

Now if I go to fastai’s git root directory and try to run “conda env update”, it tries to install PyTorch again.

 conda env update
Solving environment: done

Downloading and Extracting Packages
pytorch-0.3.1        | 486.5 MB  |   0%

How can I tell Anaconda to stop looking for PyTorch? I tried multiple times, but the download takes really long and the HTTP connection usually times out, so I couldn’t install PyTorch using “conda install pytorch” or “conda env update”.

Now that I have a working PyTorch, I don’t want Anaconda to look for PyTorch again. How do I disable that?

@redturtle,
I didn’t face any problems whatsoever while building PyTorch with CUDA 10.0.

I should have worded this as “you will have to build PyTorch from source if you use cuda 10.0, whereas you can use the stable conda package with cuda 9.2.”

I also didn’t have any issues building PyTorch 1.0 with CUDA 10.0, but using conda with 9.2 is easier and probably OK unless there are features in 10.0 that you need (e.g. Tensor Core support for Turing).


Thank you for your reply. Now I get it. Any input on how to tell/configure conda to use the already-installed PyTorch rather than trying to download one when I do “conda env update”?

Hi All,

I got my system up and running. Here is my blog post, which may help others: how to set up the machine, my hardware details, and how to access it from the outside world.

Hey everybody! This is my first time posting!

I’m torn between building a desktop and just buying one.
After reading Tim Dettmers’ post (http://timdettmers.com/2018/11/05/which-gpu-for-deep-learning), I think I want to go with the RTX 2070.

I was looking at these two options:
$1500 right now

$1600 right now (Intel CPU instead of AMD CPU).

It looks like buying may be a little bit more expensive, but pretty close.
Are there red flags I should look out for?

Yes. You should buy more RAM. You should also buy a card with more VRAM. Also, be sure the motherboard has room and slots for at least two cards running at x8/x8.

Right now, I’d buy a 1080ti rather than a 2070.

It would be better to have an NVMe SSD.

All the rest is fine.

Wow! Thank you so much! I’m going for the build!

Hi all
I started planning a new setup, since my current one will probably be a limiting factor even for learning. My goal is prototyping in the image-recognition area.
Having settled on the configuration (Ryzen 2600X CPU, B450 motherboard, 3200MHz RAM, 750W Gold PSU to allow a future second GPU, Fractal Design case), my concern is whether to invest more in RAM or in the GPU. Budget is about $1200.
For about the same cost I can upgrade the configuration from 16GB to 32GB of RAM, or from a GTX 1070 to a 1070 Ti.
Wondering which is the better choice in your experience.

Just to let you know, I finally settled on 32GB of RAM and a GTX 1070 GPU. We’ll see how it works.


I decided to water-cool my computers and have written a few notes in case anyone else is looking at doing something similar - useful if you have open-air GPU(s) and want to put two or more in one box.


Guys, I have been using a 2011 MacBook Pro.
For the past 2-3 years it has served my machine learning purposes, but recently I started on some heavy DL computational problems. Up till now I have been using AWS, but now I need a personal DL box, so I was wondering whether a 6-core Mac would be a good buy.
The problem is that I have heard that some libraries like PyTorch use CUDA as a backend, which is not supported on macOS.
So please tell me whether I should buy a Mac or an NVIDIA laptop (whose main purpose would be ML and DL tasks).

I have been using Macs for the past 10 years, so my main preference would be to buy a Mac.

I can’t speak for laptops (they’re generally worse), but on an 8-core (16-thread) Xeon with 16GB of RAM, the first notebook from the DL1 v3 course runs at least 10 times slower on the CPU alone than on a GPU (a 1070 Ti with 8GB in my case).
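If you want to ballpark that gap on your own hardware, here is a rough sketch (the matrix size and iteration count are arbitrary, and the exact speedup will vary a lot between machines):

    import time
    import torch

    x = torch.randn(4096, 4096)

    # Time a batch of large matrix multiplications on the CPU.
    t0 = time.time()
    for _ in range(10):
        y = x @ x
    cpu_s = time.time() - t0

    # Time the same work on the GPU, if one is visible to PyTorch.
    if torch.cuda.is_available():
        xg = x.cuda()
        torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(10):
            yg = xg @ xg
        torch.cuda.synchronize()
        gpu_s = time.time() - t0
        print('CPU: %.2fs  GPU: %.2fs  speedup: %.1fx' % (cpu_s, gpu_s, cpu_s / gpu_s))
    else:
        print('CPU only: %.2fs' % cpu_s)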

I would say go with a laptop that has at least a 1060 or 2060 GPU on board; otherwise it will be unusable for anything non-trivial, even an i9 MBP with 6 cores. For the money you’d pay for that, you can get a loaded Lenovo Legion or a similar “gaming laptop”, which should work for DL-type work as well.

I have never used a Legion or other “gaming laptop”, so YMMV, but just ballparking it, I would say don’t get the MBP thinking it will help with DL problems just because it has 6 cores. I use my 2011 MBA to connect to my DL box and that’s good enough for me. You may want to look at a hybrid solution like that, where you have your own DL desktop that you can remotely connect to. I find that setup cheaper and more flexible than going full-cloud.