Personal Deep Learning PC/Box Recommendations

(nima) #1

Hey folks,

I wanted to start this thread so folks can submit their configurations.

Preferably, it would be great if folks could include their PCPartPicker links here to make it easier for others to reuse their configurations.

Think of the content you create here as the beginnings of the medium post you might write up later about it :slight_smile:

0 Likes

(Xinxin) #2

Can folks also share how much time it takes to build and configure your own box? Hardware debugging is very scary to me, so I'd like to set some realistic expectations, i.e. how long building your own box takes for a complete beginner.

0 Likes

(sravya8) #3

Central Computers charges $80 to build the machine. It's money well spent if you don't particularly enjoy hardware assembly and debugging :slight_smile:

4 Likes

(David Gutman) #4

The hardest part is figuring out which parts to get. After that, it's basically an expensive set of Legos. (Just built one for the first time this weekend.)

The hardest part of the assembly is installing the CPU cooler if you bought a custom fan + heat sink, but for deep learning you don't need one and can use the stock Intel cooler (which is an easy installation).

It is definitely possible to set up in a day as a novice, and I’m sure an expert could put one together in under a half hour.

1 Like

(Xinxin) #5

@sravya8 That’s a big relief to know, I’ll definitely look it up.

@davecg congrats on getting your own box built. What was your process for making sure you got the right parts instead of the "wrong parts"? Do you mind sharing your configuration?

0 Likes

(Jeremy Howard (Admin)) #6

I'd suggest reading the thread on the main forum (linked from the ppt for this lesson).

0 Likes

(Brendan Fortuner) #7

Also, heads up about the new graphics card Nvidia just released.

1 Like

(Mariya) #9

Great suggestion and will definitely keep this in mind.

0 Likes

(Mariya) #10

Has anyone successfully used 2 GPUs in one box? Based on a preliminary Google search, it seems like parallelizing your programs across multiple GPUs is a bit of a pain.

I successfully cajoled a few professor friends of mine into applying for this NVIDIA academic grant, which offers free Titan X Pascal GPUs to accepted applicants, so I may end up having more than one on hand.

1 Like

(David Gutman) #11

Processing data in parallel actually isn't that hard: you just use a smaller batch size on each GPU and concatenate the results on the CPU. kuza55 created a script on GitHub to do it automatically.
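To make the pattern concrete, here's a toy CPU-only sketch of the data-parallel idea (kuza55's script does the real multi-GPU version in Keras; this just simulates each "GPU" with a plain Python function, so the names here are illustrative, not from that script):

```python
import numpy as np

def data_parallel(batch, forward_fns):
    """Split a batch across N 'devices', run each shard, concat results.

    Each simulated device gets roughly batch_size / n_devices examples,
    and the per-shard outputs are concatenated back together, which is
    exactly the data-parallel recipe described above.
    """
    shards = np.array_split(batch, len(forward_fns))
    outputs = [fn(shard) for fn, shard in zip(forward_fns, shards)]
    return np.concatenate(outputs, axis=0)

# Pretend we have two GPUs running copies of the same "model"
# (here just a doubling function standing in for a network).
model = lambda x: x * 2
out = data_parallel(np.arange(8).reshape(8, 1), [model, model])
# out has the same 8 rows as the input batch, each doubled.
```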

Model parallelization is a bit trickier but also possible. You can assign different layers of your model to different devices relatively easily in TensorFlow (just use with tf.device('/gpu:%d' % n): where n is your GPU number, e.g. 0). I haven't been able to get much of a speed improvement with this yet.
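As a minimal sketch of the layer-to-device assignment (no TensorFlow needed to run it; assign_layers is a made-up helper, and the strings it produces are the kind you'd pass to tf.device):

```python
def assign_layers(layer_names, n_gpus):
    """Round-robin assignment of layers to device strings.

    Returns a dict mapping each layer name to a '/gpu:N' string,
    alternating across the available GPUs, as one might do before
    wrapping each layer's ops in `with tf.device(...)`.
    """
    return {name: '/gpu:%d' % (i % n_gpus)
            for i, name in enumerate(layer_names)}

placement = assign_layers(['conv1', 'conv2', 'fc1', 'fc2'], n_gpus=2)
# conv1 and fc1 land on /gpu:0, conv2 and fc2 on /gpu:1.
```

Whether this actually speeds things up depends on how much data has to cross between devices at the layer boundaries, which is likely why the gains mentioned above were modest.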

1 Like

(Jeremy Howard (Admin)) #12

As mentioned in class and the ppt, 2 GPUs are very helpful for running an experiment on one GPU while you continue to work in a notebook on the other. So if you can afford it (or you get the grant!) it's a great idea.

2 Likes

(Mariya) #13

Let’s hope we succeed on multiple grant fronts then!

0 Likes

(Mariya) #14

Sounds like you know what you’re doing! When it comes time for me to set up my box, hope you don’t mind if I ping you with a question or two if I get stuck.

0 Likes

(Jeremy Howard (Admin)) #15

It might be better if we all migrated this discussion to the main forum at Making your own server . That way we’ll be able to get help from more people - there’s nothing really part2-specific about this thread…

0 Likes