My wife and I both work in software, and I want to learn AI via fast.ai. I learned online that there are players like vast.ai. I initially thought of buying a mid-range deep learning desktop, but now I'm wondering whether we could set one up as a home server, so both of us could use that powerful rig for our high-demand jobs from our thin-client laptops.
Has anyone tried a similar setup at home? If so, please share your setup details, both HW and SW. I could not find these online.
From a HW perspective, it's clear which GPU to target initially (RTX 30xx). What I am confused about is the CPU. It looks like AMD does not officially support Ubuntu on some CPUs. I was initially thinking of APUs like the Ryzen 4x50G with integrated graphics for a local test server, but then I read a lot of complaints about Ubuntu being highly unstable on these machines. If we go with Ubuntu, are we better off with Intel chips?
For SW, I will have to figure out how to host a Jupyter notebook on the server and then access it from any laptop on the same LAN.
The more I dig, the more bizarre things get.
Considering performance/cost, it was a no-brainer to choose an AMD Ryzen Zen 3 CPU over any Intel CPU for now (the GPU being an RTX 30xx).
I learned that the AMD 4x50G APUs and the X300/A300 chipsets officially support Win 10 (check 1, 2). I do not know what issues this setup (Ubuntu + AMD APU) would bring home.
Then I found that the ML stacks may discriminate by CPU. For example, Anaconda uses MKL, which is optimized for Intel; that is just one of many cases, and I do not know how deep a rabbit hole awaits if I go with AMD CPUs.
I had almost made up my mind on AMD CPUs for all of my rigs, but I'm slowly starting to see why they are not widespread despite the strong value for money. How many of you are successfully using your own rig made up of an AMD CPU + Nvidia GPUs + Ubuntu? And AMD APU + Ubuntu for any of your local server/mini-PC solutions?
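On the MKL point, one quick diagnostic (a sketch, assuming NumPy is installed) is to print which BLAS/LAPACK backend your NumPy build is linked against. Anaconda builds typically report MKL, while pip wheels usually report OpenBLAS, which has no Intel/AMD code-path distinction:

```python
# Print the BLAS/LAPACK build configuration of the installed NumPy.
# Anaconda's NumPy is usually linked against Intel MKL; pip wheels
# usually against OpenBLAS.
import numpy

numpy.show_config()
```

As a side note, on older MKL releases (reportedly up to 2020.0) setting the environment variable MKL_DEBUG_CPU_TYPE=5 forced the faster AVX2 code path on AMD CPUs; newer MKL versions dropped that variable.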
Hi … I had to make the same decision, and here are my 2 cents:
1-2 GPUs and limited budget -> AMD Ryzen Zen3
2+ GPUs and limited budget -> Intel i9
2+ GPUs no budget limit -> AMD Threadripper
You are totally right … that's a rabbit hole (you could add motherboard choice, RAM channels, PCIe lanes, temperature-sensor support on Linux, etc. …). But generally the CPU will not be the performance bottleneck. The only hard constraint, IMHO, for Ryzen Zen 3 is the mainboards and PCIe lanes, which limit them to 2 GPUs.
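The 2-GPU limit follows from simple lane arithmetic (a back-of-envelope sketch using the commonly cited figures for Ryzen desktop CPUs; check your specific board's manual for how its slots are wired):

```python
# Rough PCIe lane budget for a Ryzen (Zen 3) desktop CPU.
# Commonly cited figures: 24 lanes total, 4 reserved for the chipset
# link and 4 for an NVMe slot, leaving 16 for GPU slots.
total_lanes = 24
chipset_link = 4
nvme_slot = 4

gpu_lanes = total_lanes - chipset_link - nvme_slot
print(gpu_lanes)       # lanes left for GPUs -> 16
print(gpu_lanes // 8)  # GPUs at x8 each -> 2, the practical ceiling
```

Threadripper and Intel HEDT parts offer far more lanes, which is why they show up in the 2+ GPU tiers above.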
Setting up a home server for deep learning and shared use is a great idea. Many have done similar setups for efficient computing. Here’s a sample setup:
Hardware:
CPU: AMD Ryzen 9 or Threadripper (for multi-threading).
GPU: NVIDIA RTX 3090 or 4090 for strong AI performance.
RAM: At least 64GB, ideally 128GB for smoother multi-user tasks.
Storage: NVMe SSD (1TB or more) for fast access to datasets.
Networking: High-speed LAN (Ethernet) for stable connections.
Software:
OS: Ubuntu Linux is popular for AI, with great support for GPU drivers and deep learning libraries.
Environment: Set up Docker or Conda for isolated environments.
Remote Access: Use SSH, JupyterHub, or Remote Desktop Protocol (RDP) for accessing from your laptops.
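To make the environment item concrete, here is a minimal sketch using Python's built-in venv module (Conda or Docker serve the same isolation purpose; the path below is arbitrary):

```shell
# Create an isolated per-user environment with the stdlib venv module
python3 -m venv /tmp/fastai-env

# Verify the interpreter inside the new environment works
/tmp/fastai-env/bin/python --version

# Then install the stack into it (not run here), e.g.:
#   /tmp/fastai-env/bin/pip install fastai jupyter
```

Each user on the server can keep their own environment this way, so one person's package upgrades never break the other's notebooks.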
This way, you can share resources and run AI tasks effectively. Hope this helps, and feel free to ask more!
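For the remote-access piece, a common pattern is to run `jupyter notebook --no-browser` on the server and, from a laptop, forward the port with `ssh -N -L 8888:localhost:8888 user@<server-ip>`, then browse to http://localhost:8888. The snippet below is a small stdlib helper for checking from a laptop that the server's port is reachable at all (the IP address and port here are hypothetical placeholders):

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Hypothetical home-server address and the default Jupyter port
print(port_open("192.168.1.50", 8888))
```

If this prints False, check the server's firewall (e.g. ufw) and that Jupyter or sshd is actually listening before debugging anything else.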