Day One: Setting Up and Testing PyTorch with CUDA
In Windows 11
Starting PowerShell as Administrator
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl --install --no-distribution
wsl --set-default-version 2
wsl --list --online # Optional: list which Linux distributions are available to install
wsl --install -d Ubuntu-22.04
Restart Windows, then open a terminal with your Ubuntu distribution (mine is 22.04).
In WSL Ubuntu Linux Terminal
Installing Mamba Package System with Python
wget https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh
bash Mambaforge-Linux-x86_64.sh
source ~/.bashrc # reload the shell so mamba is on PATH
Creating Virtual Environment
mamba create -n ml python=3.10.12
mamba activate ml
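With the environment active, a quick sanity check is to confirm the interpreter really comes from the new env (the `~/mambaforge/envs/ml` path below is just Mambaforge's default layout, yours may differ):

```python
import sys

# The env above was created with python=3.10.12,
# so inside `ml` this should report 3.10.x
print(sys.version.split()[0])

# Should point into the env, e.g. ~/mambaforge/envs/ml/bin/python
print(sys.executable)
```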
Installing PyTorch with CUDA Support
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.6 -c pytorch -c conda-forge
Optional: check that the NVIDIA driver is visible from inside WSL
nvidia-smi
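nvidia-smi only proves the driver is reachable from WSL. Since PyTorch is already installed at this point, a more direct check is to ask it whether it can see the GPU (a minimal sketch; `is_available()` returns False if the driver and cudatoolkit versions don't match):

```python
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    # index 0 is the first visible GPU
    print("Device:", torch.cuda.get_device_name(0))
```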
Installing Fastai and Other Libraries
conda install -c conda-forge platformdirs
conda install -c fastchan fastai
conda install jupyterlab
conda install -c conda-forge ipywidgets
Running Jupyter Lab
cd ~/workspace/ml #this is my working directory
jupyter lab
In Jupyter Lab Notebook
Testing CPU and GPU
import time
import torch
import torch.nn

def test(device, tensor_core=False):
    in_row, in_f, out_f = 256, 1024, 2048
    loop_times = 10000
    tensor = torch.randn(in_row, in_f).to(device)
    linear_transform = torch.nn.Linear(in_f, out_f).to(device)
    if tensor_core:
        # half precision (FP16) lets the GPU use its tensor cores
        tensor = tensor.half()
        linear_transform = linear_transform.half()
    s = time.time()  # time only the forward passes, not the setup
    for _ in range(loop_times):
        linear_transform(tensor)
    if device == 'cuda':
        torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
    print(f"take time:{time.time()-s}")

print('* Running on CPU:')
test('cpu')
print('* Running on GPU Cuda Core')
test('cuda')
print('* Running on GPU Tensor Core')
test('cuda', tensor_core=True)
My Results
* Running on CPU:
take time:33.22483038902283
* Running on GPU Cuda Core
take time:1.0972840785980225
* Running on GPU Tensor Core
take time:0.32676267623901367
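Dividing those timings gives the relative speedups on my machine (your numbers will vary with hardware):

```python
# Timings from the run above, in seconds
cpu, cuda_core, tensor_core = 33.2248, 1.0973, 0.3268

print(f"CUDA cores vs CPU:   {cpu / cuda_core:.1f}x")        # ~30.3x
print(f"Tensor cores vs CPU: {cpu / tensor_core:.1f}x")      # ~101.7x
print(f"FP16 vs FP32 on GPU: {cuda_core / tensor_core:.1f}x")  # ~3.4x
```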