Fastai library support for RTX 5000 series

Hi! I just started the course and got to the NLP lesson (I’m loving the course so far!). Training the language model was taking a very long time in Google Colab, so I switched to running it locally on my 5070 Ti. That went much faster until I hit the `learn.fit_one_cycle(1, 2e-2)` line from the textbook, where I ran into the following error (top cut off to save space; I can post the full stack trace if it helps):

File ~/.local/lib/python3.12/site-packages/fastai/text/models/awdlstm.py:148, in AWD_LSTM._one_hidden(self, l)
    146 "Return one hidden state"
    147 nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
--> 148 return (one_param(self).new_zeros(self.n_dir, self.bs, nh), one_param(self).new_zeros(self.n_dir, self.bs, nh))

RuntimeError: Exception occured in `TrainEvalCallback` when calling event `before_fit`:
	CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

At first I thought this was a PyTorch version or CUDA driver issue, but both are up to date, and a quick PyTorch sanity check:

    import torch

    x = torch.randn(1_000, 1_000, device="cuda")
    y = torch.randn(1_000, 1_000, device="cuda")
    z = torch.matmul(x, y)
    torch.cuda.synchronize()

ran without error, which seemed to show PyTorch working with CUDA (assuming that simple test is meaningful). After seemingly ruling out PyTorch itself, I found a thread on the PyTorch forums with the same error indicating that third-party libraries built on PyTorch may need to add support for newer GPU architectures.
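In case it helps anyone else debugging this: the “no kernel image is available” error usually means the installed PyTorch wheel wasn’t compiled for the GPU’s compute capability. A rough sketch of a check, comparing the device’s capability against the build’s compiled architecture list (the `arch_supported` helper is my own, not a PyTorch API, and the Blackwell `sm_` value in the comment is my assumption):

```python
def arch_supported(capability, arch_list):
    """Check whether a compute capability tuple like (12, 0) appears in a
    PyTorch build's compiled architecture list like ['sm_80', 'sm_120']."""
    return f"sm_{capability[0]}{capability[1]}" in arch_list

try:
    import torch
    if torch.cuda.is_available():
        cap = torch.cuda.get_device_capability(0)  # e.g. (12, 0) on Blackwell
        archs = torch.cuda.get_arch_list()         # archs this wheel was built for
        print(f"device capability: {cap}")
        print(f"build arch list:   {archs}")
        print("kernels available:", arch_supported(cap, archs))
except ImportError:
    pass  # PyTorch not installed; the helper above still works standalone
```

If the device capability isn’t in the arch list, simple elementwise ops may still run (via PTX JIT) while other kernels fail, which would explain the matmul test passing.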

Is there any configuration that I’m missing that I’d need to do to get fastai working with my GPU? Does the fastai library support the 5000 series yet generally? Thanks for the help! I love the work you all have put in making the fastai library so accessible!

Edit: I realized I should include system info, whoops. This is in WSL2 on Windows 11, running Ubuntu 24.04.2 LTS.


Blackwell cards (B100 & B200, RTX 5000 series, etc.) require a minimum of PyTorch 2.7 with CUDA 12.8: https://pytorch.org/blog/pytorch-2-7.

As PyTorch 2.7 is a recent release, fastai hasn’t been updated to support it yet. But you can try overriding the pinned dependency version, installing PyTorch 2.7 with CUDA 12.8, and seeing if that resolves the issue.
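For reference, the install command for a CUDA 12.8 build looks something like the following (check the selector at https://pytorch.org/get-started/locally/ for the exact command for your setup; the index URL here is what the site generates for the cu128 wheels):

```shell
# Install PyTorch built against CUDA 12.8 from the official wheel index
pip install --upgrade torch --index-url https://download.pytorch.org/whl/cu128
```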
