Getting started - Local System

After setting up mamba, fastai, and fastbook on my local system (WSL on Windows), which has an NVIDIA GeForce MX450, I run

from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

I get this error:

OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 2.00 GiB total capacity; 1.61 GiB already allocated; 0 bytes free; 1.66 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

In the Performance tab of Task Manager on Windows, I can see GPU 0 and GPU 1. GPU 1 is the NVIDIA GeForce MX450 and shows 10 GB of memory, yet the error says it cannot allocate even 26 MiB.

Please help me solve the issue.

Are you sure the MX450 has 10 GB? Your error says 2 GB, and that matches the spec I found. (Task Manager's larger number likely includes shared system memory, not just dedicated VRAM.)

Try reducing the batch size, e.g. try setting bs=10.
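To see why this helps, here's a rough back-of-the-envelope sketch (all numbers are illustrative assumptions, not measurements): GPU memory for one batch of 224x224 RGB float32 images grows linearly with batch size, and the activations and gradients of resnet34 add much more on top but scale the same way. You can pass `bs=10` directly to `ImageDataLoaders.from_name_func`, which forwards it to the underlying dataloaders.

```python
# Hedged estimate of input-batch memory only; real usage is several
# times higher once model activations and gradients are counted.
def batch_input_mib(bs, side=224, channels=3, bytes_per_elem=4):
    """MiB needed to hold one batch of side x side RGB float32 images."""
    return bs * side * side * channels * bytes_per_elem / 2**20

print(batch_input_mib(64))  # fastai's default batch size
print(batch_input_mib(10))  # the suggested smaller batch size
```

The point is the linear scaling: dropping bs from the default 64 to 10 cuts this term (and the much larger activation memory) by more than 6x, which is often enough to fit a 2 GB card.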

You are right!
Do you recommend using Google Colab?

I tried with batch size = 10 and it seems to be working now. How do you find the best batch size?

Try larger sizes until it crashes, then go back to the last working bs.
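That search can be sketched as a doubling loop. This is a hedged illustration, not fastai code: `train_step` stands in for running one real training step, and `fake_train_step` with its hypothetical `capacity` just mimics a CUDA OOM by raising `MemoryError`.

```python
# Double the batch size until a (simulated) out-of-memory error,
# then report the last size that worked.
def largest_working_bs(train_step, start=2, limit=1024):
    best = None
    bs = start
    while bs <= limit:
        try:
            train_step(bs)   # in practice: run one real training batch
            best = bs
            bs *= 2          # double until it fails
        except MemoryError:  # stand-in for torch.cuda.OutOfMemoryError
            break
    return best

def fake_train_step(bs, capacity=100):
    """Hypothetical stand-in: 'OOMs' whenever bs exceeds capacity."""
    if bs > capacity:
        raise MemoryError("CUDA out of memory (simulated)")

print(largest_working_bs(fake_train_step))  # → 64
```

If you want to squeeze out a bit more, you can then binary-search between the last working size and the first failing one, but in practice the nearest power of two below the crash point is usually fine.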
