You only have one NVIDIA GPU, so you can only call `set_device(0)`.
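As a small sketch of that constraint (the `pick_device` helper is hypothetical; only the PyTorch calls named in the comments are real), you can guard against requesting a device id that isn't there:

```python
def pick_device(requested_id, device_count):
    """Clamp a requested GPU id to the devices actually present.
    With a single GPU, only id 0 is valid."""
    if device_count == 0:
        raise RuntimeError("no CUDA devices available")
    return min(requested_id, device_count - 1)

# With one GPU (device_count == 1), any request falls back to device 0.
# In a real script: torch.cuda.set_device(pick_device(1, torch.cuda.device_count()))
print(pick_device(1, 1))  # prints 0
```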
I have come across this GPU-based PC configuration, which costs around 100,000 Indian rupees.
Is it good enough for our fastai course and for running Kaggle datasets?
If you built it with a custom box, could you make sure the water wouldn’t drip on any other components if a junction leaks? Can the water attachments always face down so that a leak doesn’t drip onto the connected component? Most cases I’ve seen fill the water from the top, so this is a great idea; I would be terrified of ruining hardware with a leak.
Due to a CUDA out-of-memory error on a model I’m trying, I had to upgrade from a 960 today. I was able to get a 1080, which has double the memory, at “near” MSRP from Best Buy (https://www.bestbuy.com/site/nvidia-founders-edition-geforce-gtx-1080-8gb-gddr5x-pci-express-3-0-graphics-card-black/5330600.p?skuId=5330600). Also, it looks like NVIDIA currently has Titan Xps in stock at an MSRP of $1200, which is around what some places are trying to charge for 1080 Tis.
These links might be helpful for others dealing with inflated GPU prices:
Hope that helps others.
Hi! With what we’ve done through lesson 10 of part 2, you should be OK with a 1050 Ti if you don’t mind training models overnight. You can also get a desktop or a laptop with a 1060 on a budget of INR 100k.
I think older posts in this thread have all the blog posts and insights on configuration that you’ll need.
Check out these links if you’re looking for parts that go well together and are buying in India
Thanks for the reply.
Another question: if I buy a low-end PC now, can I upgrade or add newer RAM, a GPU, and an SSD to the cabinet in the future?
What is the difference between the roles of system RAM and GPU memory when training our models?
You won’t have a problem upgrading the RAM/SSD, but you’ll have to change the power supply to handle a second or upgraded GPU. Maybe even get a bigger UPS (I made the UPS mistake).
The GPU memory is where the data (think of a mini batch) and the model reside while training with CUDA. It can be filled either from system RAM via a pandas DataFrame, or lazily from the hard disk when loading images from a folder with fastai’s `ImageClassifierData`. More GPU memory lets you work with larger mini batches and larger models.
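To make the mini-batch part concrete, here is a rough back-of-the-envelope sketch (the numbers are illustrative; it counts only the raw float32 input tensor and ignores model weights, gradients, and activations, which usually dominate):

```python
def minibatch_bytes(batch_size, channels, height, width, bytes_per_elem=4):
    """Approximate GPU memory taken by one float32 image mini batch."""
    return batch_size * channels * height * width * bytes_per_elem

# 64 RGB images at 224x224 (a typical ImageNet-style input):
mib = minibatch_bytes(64, 3, 224, 224) / 2**20  # bytes -> MiB
print(f"{mib:.2f} MiB")  # prints 36.75 MiB
```

Doubling the batch size doubles this figure, which is why a card with more memory lets you use larger mini batches before hitting a CUDA out-of-memory error.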
I am planning to go with the 96k PC from the link you posted above. So I should get a bigger UPS and power supply so that I can upgrade the GPU in the future without any problems.
Is there anything else I should know before buying this?
Thank you for all the help.
Hi, you can contact the company about the power supply; they’ll know how much you need and will change it for you. I’d say stick with the default if you’re not sure about getting another GPU (prices don’t increase linearly).
The writer of this blog post used a 750 W unit for 2x 1080 Ti.
Regarding this company: make sure they put the serial numbers of all the parts on the bill. That’s how you claim warranty on the GPU and other parts.
I just put together my DL box, thanks to this forum. Here is my blog post sharing my experience.
Thanks, this looks like a great build!
I’m curious whether you’re fully utilizing the features of your MSI Godlike motherboard. The MSI M5 Gaming model looks good for a single video card; I’m weighing the trade-off between cost and future-proofing.
For those of you comfortable waiting on a backordered product, I see 1080 Tis on B&H at pre-crypto-craze prices (750 USD).
Use a 1000 W PSU for two GTX 1080 GPUs.
I hate cross-posters too, but since this topic is all about building your own machine, learn from my mistake. I built a box with two GTX 1080s that worked fine when running each one separately on a 750 W Platinum PSU. But when I used PyTorch’s `nn.DataParallel` to run both at the same time, the machine crashed, and tracking down why was no fun. Now I know better, and so do you: use a 1000 W PSU if you have two GPUs in your box to avoid these problems.
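A rough sizing sketch of why 750 W is marginal here, assuming ~180 W TDP per GTX 1080, a flat ~200 W for the rest of the system, and transient spikes of roughly 1.5x TDP (all figures are illustrative assumptions, not measurements):

```python
def min_psu_watts(gpu_tdp, n_gpus, rest_of_system=200, spike_factor=1.5):
    """Crude PSU sizing: allow for per-GPU transient spikes
    plus a flat budget for CPU, drives, and fans."""
    return gpu_tdp * spike_factor * n_gpus + rest_of_system

# Two GTX 1080s can momentarily draw close to the 750 W limit:
print(min_psu_watts(180, 2))  # prints 740.0
```

Under these assumptions a dual-GPU load can brush right up against a 750 W supply, which is consistent with crashes that only appear when both cards run at once.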
With a command like `sudo nvidia-smi -pl 200`, you can limit the GPU’s power consumption.
(Performance actually doesn’t drop that much under a power limit.)
I’m not sure whether this will fix the issue, but I guess it’s worth trying.
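If you want to check the current limit from a script, one way is to parse the CSV query output (a sketch: `gpu_power_limits` is a made-up helper, but the `nvidia-smi` query flags in the comments are real):

```python
def gpu_power_limits(smi_csv):
    """Parse `nvidia-smi --query-gpu=power.limit --format=csv,noheader,nounits`
    output into a list of per-GPU power limits in watts."""
    return [float(line) for line in smi_csv.splitlines() if line.strip()]

# On a machine with the NVIDIA driver installed you would fetch it live, e.g.:
#   out = subprocess.check_output(
#       ["nvidia-smi", "--query-gpu=power.limit",
#        "--format=csv,noheader,nounits"], text=True)
# A captured sample lets the sketch run without a GPU:
print(gpu_power_limits("200.00\n200.00\n"))  # prints [200.0, 200.0]
```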
Would suggest buying components in the US and bringing them to India. It’s much cheaper that way. I was able to build my rig for 1.1 lakh (it would have been 1.6 lakh had I purchased in India).
Write up on the components and prices - https://medium.com/@Stormblessed/building-my-own-deep-learning-rig-for-under-1-lac-in-india-4ade685b8c56
The Threadripper PCIe errors in Linux are fixed by adding `pcie_aspm=off` to GRUB. See here: PCIe errors
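The change amounts to appending one parameter to the kernel command line in `/etc/default/grub` (then running `sudo update-grub` and rebooting). A tiny sketch of the string edit, using a hypothetical helper:

```python
def add_kernel_param(cmdline, param="pcie_aspm=off"):
    """Append a kernel parameter to a GRUB_CMDLINE_LINUX_DEFAULT value
    unless it is already present."""
    params = cmdline.split()
    if param not in params:
        params.append(param)
    return " ".join(params)

print(add_kernel_param("quiet splash"))  # prints: quiet splash pcie_aspm=off
```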
My Threadripper system has been stable 24/7 for months, and I am completely satisfied. The platform simply offers a lot more for the money (cores, PCIe lanes). The extra CPU cores are not necessary for DL, but they are very useful for other applications (compiling code, simulations, etc.).
With that said, if this is your first build, I think you are better off with Intel. As a relatively new platform, Threadripper requires more tweaks and planning:
- Zen v1 is pickier about memory than Intel, so I would only buy RAM from the QVL list
- If installing Ubuntu 16.04 LTS, you should upgrade to a more modern kernel (4.16), as there are many new drivers and tweaks for the Zen platform
- The TR4 socket can be a PITA to physically install
Ryzen v2 was just released and supposedly fixes a lot of the memory issues. Threadripper v2 will be released in August and is expected to offer similar stability and ~10% performance improvements.
After getting pretty distracted by a ‘new’ build and looking into watercooling, I’ve written a post here. Soon I’ll be done with the Xeon build and will be able to focus again on some DL work.