The link to access the recommended options for GPU servers redirects to the course website. Where can I find the list?
Hi
@kritirakheja I see you are reading the book Deep Learning for Coders. It depends, to be honest. There have been a couple of discussions about this. For example: Original post
Building local
—
Building local GPU server - #8 by josca42 and Build a Pro Deep Learning Workstation... for Half the Price
Cloud
—
You have a couple of options depending on how much you are willing to spend. I recommend the following:
Colab
RunPod
Vast.ai
Lambda
Hugging Face
I personally use Google Colab since most of the classes use notebooks, and it gives an easy interface for switching the runtime between GPU and TPU, plus an easy way to get more compute credits. RunPod and Vast.ai are good for long-running jobs and mimic the VMs you would get from the major cloud providers. Hugging Face, especially Jobs, can be useful for running quick jobs, while Inference Endpoints and Spaces for practicing demo building make it a good suite of tooling.
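If you want to confirm that the GPU runtime is actually active in Colab, a minimal sketch like this works (it assumes PyTorch, which Colab preinstalls; use the TensorFlow equivalent if that is your stack):

```python
# Quick check that the Colab runtime has a GPU attached.
import torch

if torch.cuda.is_available():
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No GPU - change it via Runtime > Change runtime type")
```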
All the best.
Thank you so much!
Hello,
As a data science student or professional, Hugging Face is invaluable because it is the central Hub for millions of state-of-the-art models and datasets, particularly in NLP, computer vision, and audio, allowing you to quickly find, download, and fine-tune models using their ubiquitous transformers library. Beyond the open-source tools, Hugging Face offers two key compute options: Spaces and Inference Endpoints. Spaces provide an easy way to host interactive demos (often with Gradio or Streamlit) directly from your code, similar to a perpetual, publicly shareable notebook for showcasing results. For production, Inference Endpoints offer a fully managed, scalable solution for hosting models as a dedicated API on various CPU and GPU hardware, with built-in features like auto-scaling and security, which fills the gap between the collaborative environment of Colab and the raw VM power of RunPod.
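To make the Spaces part concrete, here is a minimal sketch of the kind of app you would push to a Space (it assumes the gradio and transformers packages; the sentiment-analysis pipeline is just an illustrative default, not a specific recommendation):

```python
# Minimal Gradio demo: wraps a transformers pipeline in a shareable web UI.
import gradio as gr
from transformers import pipeline

# Downloads a default sentiment model from the Hugging Face Hub on first run.
classifier = pipeline("sentiment-analysis")

def classify(text: str) -> str:
    result = classifier(text)[0]
    return f"{result['label']} ({result['score']:.2f})"

demo = gr.Interface(fn=classify, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()  # on a Space, this serves the public demo page
```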
Best Regards,
capaniv
