For those who run their own AI box, or want to:

Thank you @slowtalk for explaining how to fix the FileUpload widget code.

I did the following to run the fastbook Jupyter notebooks on WSL2 Debian bookworm, using pytorch-cuda so my NVIDIA RTX 3080 GPU gets used:

mkdir -p ~/.local/opt/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/.local/opt/miniconda3/miniconda.sh
bash ~/.local/opt/miniconda3/miniconda.sh -b -u -p ~/.local/opt/miniconda3
. ~/.local/opt/miniconda3/bin/activate

git clone https://github.com/fastai/fastbook ~/workspace/fastbook
cd ~/workspace/fastbook

conda env create -f environment.yml -n fastbook
conda activate fastbook
conda install -c conda-forge -c fastchan -c pytorch -c nvidia notebook=6.5.6 nb_conda_kernels jupyter_contrib_nbextensions fastai nbdev fastbook pytorch torchvision torchaudio pytorch-cuda=12.1 graphviz python-graphviz

jupyter trust *.ipynb clean/*.ipynb
jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
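Once the notebook server is up, a quick sanity check in a cell confirms the environment actually sees the GPU. A small sketch (the helper name is mine; it degrades gracefully if torch is missing):

```python
def cuda_status():
    """Report whether PyTorch is installed and can see a CUDA device."""
    import importlib.util
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if torch.cuda.is_available():
        # e.g. "CUDA available: NVIDIA GeForce RTX 3080"
        return f"CUDA available: {torch.cuda.get_device_name(0)}"
    return "CUDA not available"

print(cuda_status())
```

If this reports "CUDA not available" inside the fastbook env, recheck the pytorch-cuda install step above before debugging anything else.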

After changing the code that uses the ipywidgets FileUpload widget, following the @slowtalk post I linked, it works.
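For anyone landing here without reading the linked post: the root cause is the ipywidgets 7 → 8 API change. FileUpload.data was removed in ipywidgets 8, and the uploaded bytes now live in the widget's value entries under content. A hedged sketch of extracting the bytes either way (the helper is mine, not from the linked post):

```python
def uploaded_bytes(value):
    """Return the raw bytes of the first uploaded file from FileUpload.value,
    handling both the ipywidgets 7 dict-of-dicts style and the
    ipywidgets 8 sequence-of-entries style."""
    # ipywidgets 7: value is a dict keyed by filename; 8: a tuple/list of entries
    item = next(iter(value.values())) if isinstance(value, dict) else value[0]
    # entries expose the payload as item["content"] (dict) or item.content (attr)
    content = item["content"] if isinstance(item, dict) else item.content
    return bytes(content)

# In the notebook, instead of PILImage.create(uploader.data[0]):
#   img = PILImage.create(uploaded_bytes(uploader.value))
```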

Good catch! I have an entry in the forum under “Download_images() issue” but got no feedback, and then I came across your entry. I was totally stuck, so your insight has saved the day.

Regards, Jon

Hello,

If you use NixOS, I assume you have already configured your system to use the NVIDIA drivers via the hardware module.

Setting up Jupyter Notebook with fastai is actually quite simple with Nix.

You only need a default.nix in your working directory.

{ pkgs ? import <nixpkgs> { } }:
pkgs.mkShell {
  buildInputs = with pkgs; [
    jupyter-all
    (with jupyter-all.pkgs; [
      matplotlib
      tqdm
      torch
      typing-extensions
      torchvision
      fastai
    ])
  ];
  JUPYTER_CONFIG_DIR = "./config";
}

However, this builds everything from source, which can take a long time.

If you want to use the patched binaries that already include CUDA support, you can modify the default.nix file as follows. Note that the override is currently needed to fix triton-bin.

{ pkgs ? import <nixpkgs> {
  config = { allowUnfree = true; };
  overlays = [
    (final: prev: {
      python312 = prev.python312.override {
        packageOverrides = final: prevPy: {
          triton-bin = prevPy.triton-bin.overridePythonAttrs (oldAttrs: {
            postFixup = ''
              chmod +x "$out/${prev.python312.sitePackages}/triton/backends/nvidia/bin/ptxas"
              substituteInPlace $out/${prev.python312.sitePackages}/triton/backends/nvidia/driver.py \
                --replace \
                  'return [libdevice_dir, *libcuda_dirs()]' \
                  'return [libdevice_dir, "${prev.addDriverRunpath.driverLink}/lib", "${prev.cudaPackages.cuda_cudart}/lib/stubs/"]'
            '';
          });
        };
      };
      python312Packages = final.python312.pkgs;
    })
  ];
} }:

pkgs.mkShell {
  buildInputs = with pkgs; [
    jupyter-all
    (with jupyter-all.pkgs; [
      matplotlib
      tqdm
      torch-bin
      typing-extensions
      torchvision-bin
      fastai
    ])
  ];
  shellHook = ''
    export JUPYTER_CONFIG_DIR=$(pwd)/config
    export FASTAI_HOME=$(pwd)/fastai 
  '';
}
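Once inside the shell, you can check that the binary build actually carries CUDA support: torch.version.cuda is a version string for CUDA builds and None for CPU-only ones. A small illustrative wrapper (the helper name is mine):

```python
def torch_cuda_build():
    """Return the CUDA version PyTorch was built against, or None for
    CPU-only builds / when torch is not installed."""
    import importlib.util
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    return torch.version.cuda  # e.g. "12.1" for a CUDA build, None otherwise

print(torch_cuda_build())
```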

Here is the benchmark using a laptop 4060:

[benchmark screenshot]

Hi, I was trying to run the first lesson of the book (fastbook/01_intro.ipynb at master · fastai/fastbook · GitHub) on my relatively old computer.
It has an NVIDIA GeForce GTX 1050 (2 GB).

I'm getting an out-of-memory error. Is it possible that this happens because I misconfigured something? Or do I need a larger/better graphics card for this?

OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB. GPU 0 has a total capacity of 1.95 GiB of which 7.00 MiB is free. Including non-PyTorch memory, this process has 1.93 GiB memory in use. Of the allocated memory 1.83 GiB is allocated by PyTorch, and 53.69 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

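A 2 GB card genuinely is the limiting factor here rather than a misconfiguration: the lesson-1 defaults need more VRAM than a GTX 1050 has. The usual workaround is to shrink the batch size, since activation memory grows roughly linearly with it. A sketch (the DataLoaders excerpt is hypothetical, adjust to match your notebook; the helper is just a back-of-envelope model, and the MiB figures are illustrative assumptions, not measurements):

```python
# In the lesson-1 DataLoaders call, pass a smaller bs (hypothetical excerpt):
#
#   dls = ImageDataLoaders.from_name_func(
#       path, get_image_files(path), valid_pct=0.2, seed=42,
#       label_func=is_cat, item_tfms=Resize(224),
#       bs=16,   # default is 64; try 8 or 4 if it still OOMs
#   )
#
# Why this helps, as a rough linear model of GPU memory use:
def max_batch_size(vram_mib: int, fixed_mib: int, per_sample_mib: int) -> int:
    """Largest batch size fitting in vram_mib, given a fixed overhead
    (weights, CUDA context) and a per-sample activation cost."""
    return max(1, (vram_mib - fixed_mib) // per_sample_mib)

# e.g. ~2000 MiB card, ~1200 MiB fixed overhead, ~25 MiB per 224px image:
print(max_batch_size(2000, 1200, 25))  # -> 32
```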

Is this normal when using Kaggle?

Is this normal when using Colab?