Personal DL box


(Jeremy Howard) #80

@lymitshn I wouldn’t suggest using your own box for learning this course - better to use the fast.ai AMI on AWS, Paperspace, or Crestle. Once you’re comfortable with the basic techniques then you can come back to getting your own machine working. You’ll know enough at that point to understand how to debug your issues and ask for help in a way that we can be useful to you.


#81

Thank you for the suggestion, but I really want to run on my local machine.
I created a new env and verified that torch uses the GPU; I also added 6 GB of swap, and it seemed to work this time. It ran until hitting 12% (slowly…), but it was only using 800 MB of VRAM and 16% GPU power at peak, and after it consumed all DRAM and swap, the kernel restarted itself.
It can clearly access the GPU, but it still uses a lot of DRAM. Is this how the model is supposed to work, or is something wrong with my setup?
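One way to sanity-check a setup like this (a sketch of my own, not from the course materials; the API names are from recent PyTorch versions and may differ on older installs) is to ask PyTorch directly which device it sees and how much VRAM it is actually holding:

```python
# Hedged sketch: quick checks that PyTorch can really see and use the GPU.
import torch

print(torch.__version__)
print(torch.cuda.is_available())            # True on a working CUDA setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))    # e.g. your 1070's name
    x = torch.randn(1024, 1024).cuda()      # allocate a tensor on the GPU
    print(torch.cuda.memory_allocated(0))   # bytes of VRAM held by tensors
```

If `is_available()` is True but DRAM still climbs, the memory pressure is likely coming from the CPU-side data loading rather than the model itself.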


#82

Hi! I ran into the same issue. The solution was just given on Wiki: Lesson 1. It was not a setup issue (at least in my case); reducing the number of workers was necessary for loading/transforming the data, since this part is done on the CPU/main RAM:
data = ImageClassifierData.from_paths(......, num_workers=1)
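For intuition on why this helps (a pure-stdlib sketch of my own, not fastai code): each data-loading worker is a separate process with its own memory, so CPU-side RAM use grows with num_workers, and keeping the pool small keeps peak RAM down at the cost of loading speed:

```python
# Illustrative sketch (stdlib only, not fastai): DataLoader-style workers are
# separate processes, so each extra worker adds its own memory footprint.
from multiprocessing import Pool

def load_and_transform(path):
    # stand-in for CPU-side image decoding / augmentation
    return path.upper()

if __name__ == "__main__":
    paths = [f"img_{i}.jpg" for i in range(8)]
    # processes=1 mirrors num_workers=1: lowest peak RAM, slowest loading
    with Pool(processes=1) as pool:
        batch = pool.map(load_and_transform, paths)
    print(batch[0])  # prints "IMG_0.JPG"
```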


(Nikhil B ) #83

Regarding a personal DL box, I’m seeing some Presidents Day deals bundling Windows with a dual-GPU setup (two 1050 Ti or 1070 cards). The big price increase of the 1080 Ti since last November hasn’t helped.

I’m weighing the benefit of having two cards to run two experiments against a single, faster 1080 Ti. The main use case is being ready for part 2 of this course and my own learning.

Does anyone have such a setup or know about the pros and cons?
I’m wondering whether I’ll have to do a dual-OS install, since the machine comes with Windows, and whether I’ll be able to access both cards fully.


#84

I have 2x 1080 Ti in a dual-boot, separate-hard-drives setup.

I would recommend a dual-boot setup only if you can install each OS on its own drive so there is no interference. I have posted several links regarding this.

As for multiple GPU cards: if you can afford them, great, but the fastai library will not use both cards while training at this time. However, there are other DL frameworks that can use all available GPUs with no real setup. I don’t experiment while another job is training, so for fastai duty one card drives the display and the other trains.


(Nikhil B ) #85

Thanks! I did look at your dual-boot link. Right, I’ve seen people try to use both cards for training without much success. I’m more interested in the interleaved approach, i.e. training on one card and using the other for some interactive/lightweight work.

I saw the thread on fastai installation on Windows; I’m not sure how mature it is. If I can get it to work, a single OS would suffice. Still a noob in this respect, I have questions about whether one card has to be reserved for display purposes, and whether Windows gives full access to both GPUs, etc.

While I’ve got you here, did NVMe drives help with performance significantly? And how much RAM did you use? I’m wondering if 32 GB is a must.


#86

The newest thread on Windows that Jeremy did recently should be pretty mature. I tried to help a bit with it.
Keep in mind that some CPUs come with integrated graphics, which you could always use for display. My setup does not have integrated graphics.

I think NVMe helps a lot. I created this thread hoping people would add their own setups to it. While not apples to apples, the Ubuntu (on NVMe) setup performed much faster than Windows (NAND SSD?), with all other hardware being equal. It also shows what a 1060 can do with 16 GB of system RAM. Quite a bit slower…


(Nikhil B ) #87

Thanks, I’ll keep this forum posted.


(Carlo Mazzaferro) #88

Starting from scratch and this is the setup I’m looking at: https://pcpartpicker.com/list/ZFkRBb

Anyone care to chime in with suggestions? I suppose the first thing would be swapping the 1070 Ti for a 1080 Ti, but with the current state of GPU prices that will be hard.

The mobo/CPU combo seems a bit pricey too, but I couldn’t find a solid motherboard with 3+ x16 PCIe slots.


#89

That’s an expensive mobo! An NVMe SSD would be a great addition. I would probably go for an AMD CPU and not worry much about the GPU lanes. I haven’t done any multi-GPU training just yet, but if I were assembling a box myself I would go for a single 1080 Ti (assuming it was within my budget) and maybe add a weaker GPU sometime down the road for experimenting while my main GPU is working on something. The secondary one would probably be something very small/cheap.


#90

You won’t see 3x16 with your spec’d 28-lane CPU, or even with a 40-lane CPU. Even if you could physically fit the cards in and they all ran at full x16, there would still be a bottleneck between the CPU and the PCIe controller. You will probably be running 2x8 with that setup anyway.


(Carlo Mazzaferro) #91

Well noticed. I suppose focusing on a single higher-end GPU will be much less of a hassle, and with another 8 free PCIe lanes I could still add a second card later (the consensus seems to be that x16 vs. x8 doesn’t make much of a difference).

I’m wondering whether getting a mobo with 2–3-way SLI support would make any difference?


#92

I would go for a single GPU and increase the available RAM. For a new desktop, 16 GB can be a restriction for some preprocessing/data-manipulation tasks. 32 GB would be more “future proof”.


(Robert Salita) #93

Here’s my DL build. I’m delighted to have a dedicated GPU with 11 GB. I use the system as a cloud computer via TeamViewer and ngrok. If you’re in the USA, France, or the UK, consider using shadow.tech instead of buying your own system. Their service is, as of now, glitchy, but it’s arguably a better value than buying a $1500+ system.

8700K, 1080Ti, micro-ATX, silent build

My build uses an Intel 900p Optane SSD, but I’m frankly unimpressed with the value. An NVMe SSD is all you need.


#94

I just started looking into making my own setup. I’ve been doing everything on a mac.

Is the way to go building a box and then accessing it remotely from a laptop? I’m looking to build something close to the little-over-$500 range Jeremy mentioned. This is the closest I’ve found: https://towardsdatascience.com/build-a-deep-learning-rig-for-800-4434e21a424f
https://pcpartpicker.com/list/dk6RCy
https://pcpartpicker.com/list/xQJVRG

Not sure about the laptop.

Any recommendations?


(Andrea de Luca) #95

I recommend ECC memory, preferably registered, and at least 40 lanes.

No need to spend a fortune. You can buy all the necessary hardware on eBay. Look for a first- or second-generation Xeon E5.

Useful resource:

My rig:


#96

OK, thanks for the feedback and resources. What about the OS for the box and laptop? Do I want to dual-boot Windows and Ubuntu on the box? And does the OS for the laptop matter?

It sounds like the laptop doesn’t have to be anything special, and I can just access the box remotely through SSH or Jupyter and run everything on the box. Is that right?


(adrian) #97

You don’t need Windows at all unless you need it for a non-DL reason. Once you install Ubuntu on both the laptop and the tower, you can use VNC between them. I use TigerVNC, as it was the easiest for me to get working on my setup; TightVNC is popular across this forum. On the tower you run vncserver, which runs a service on port 5900 (display :0), from memory; then you run a GUI client on the laptop to display the tower’s screen.

Or as you mention you can run jupyter without vnc too.

I have found rsync super handy for syncing data between two machines.

Another option for your build is second hand.


#98

OK, cool, thank you. Just to clarify, the laptop also needs Ubuntu, correct? And does “second hand” just mean buying used?


(adrian) #99

You could run a Windows VNC client to view your “server” Linux desktop, so you’ve got three options there (Linux/Windows/dual boot).

I’m running Linux on all my machines; pretty much the only thing I miss is Excel plotting (LibreOffice is OK, but not as good).

Yes, used. I wrote a post here (which I linked to somewhere else, hidden away in this forum; please forgive the cross-post) with a bit more info on budget builds at the end of the post.