CUDA 3.0 with fastai

I happen to have a laptop with an older NVIDIA card with CUDA compute capability 3.0.
I know it doesn't work with current versions of PyTorch, but is there any way I could get hold of an older PyTorch version, or recompile it, to make it work with current fastai?

Hmm, support for CUDA compute capability 3.0 was dropped in PyTorch 0.3.1.

Today PyTorch is at 2.0.0 (and at 1.13.1 on the v1 branch), and fastai 2.7.12 follows it closely.

You can get older versions of PyTorch here:

But those very old versions typically require an older version of fastai too, so I don't think you can use them with the current fastai.

I think the best option is to use the current version of fastai with the current version of PyTorch on that laptop, but with the CPU build, provided the CPU isn't too weak.
On the PyTorch site there is an install matrix for different OSes and compute platforms, so you can pick CPU instead of CUDA.
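For example, with pip the CPU-only build looks roughly like this (command pattern taken from the PyTorch install matrix; double-check the site for the exact line for your OS and Python version):

```shell
# Install CPU-only PyTorch wheels (no CUDA runtime required)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

The `--index-url` points pip at the CPU-only wheel repository instead of the default CUDA builds.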

Another idea is not to run these locally on your laptop, but on the web - I know it's a trade-off, but it can be worth it.
There are services you can use relatively freely to practice and try things out, like Google Colab or Kaggle, and they have newer GPUs.


Of course, you can try these different approaches and see which one you like best.
The main thing is not to give up, because in the end it will be really worth it :slight_smile:


You have another possibility for your local adventures :).
If your laptop's GPU has CUDA compute capability 3.0, I guess it's a GTX 6xx or 7xx card.
But even these cards support DirectX 12 :slight_smile:
And here comes DirectML :slight_smile:
DirectML is compatible with NVIDIA Kepler (GTX 600 series) and above, so you should be fine.
There are TensorFlow and PyTorch versions of it; of course, you need the PyTorch one.
The latest torch-directml 0.1.13+ supports PyTorch 1.13, which also looks fine.

Here is how to install and use it properly.
So you have to install the CPU-only PyTorch plus torch-directml based on this; then you can use a torch_directml.device() instead of the usual CUDA device for your model and tensors, and it will use your GPU via DirectX 12 under the hood :slight_smile:
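A rough sketch of the install steps described above (package names from the torch-directml PyPI release; verify against Microsoft's DirectML docs, since versions move):

```shell
# 1) CPU-only PyTorch first - DirectML supplies the GPU path on top of it
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# 2) Then the DirectML plugin (Windows package)
pip install torch-directml

# Afterwards, in Python (sketch):
#   import torch
#   import torch_directml
#   dml = torch_directml.device()        # DirectML device instead of "cuda"
#   x = torch.ones(2, 2).to(dml)        # tensors/models moved as usual with .to()
```

The only change in your training code is which device you pass to `.to()`; everything else stays standard PyTorch.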

Cool right? :slight_smile:


Unfortunately, it's Windows only.
Well, I guess I'm out of luck for local installations; thanks anyway.

Even if you did manage it, the next challenges would be limited GPU memory for training and inference, along with slow speeds.

Online services win this round :slight_smile:
