CUDA out of memory

I am encountering an error when trying to run lr_find() directly from the lesson3-planet notebook. It tells me that I have run out of memory even though there is still memory available.

I have tried restarting the kernel, and I have tried torch.cuda.empty_cache() and gc.collect() to clear the GPU, but I'm still running into the same CUDA out of memory error when running the lesson3-planet notebook directly.
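For reference, this is roughly what I ran to try to free the memory (a minimal sketch of just those two calls):

import gc
import torch

gc.collect()                # collect unreferenced Python objects first
torch.cuda.empty_cache()    # then release cached blocks back to the CUDA driver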

Try to reduce the batch size.

For the lesson3-planet notebook, where would I specify the batch size? Would it be when I instantiate cnn_learner? I'm not exactly sure where to put it, and I don't see it specified in any of the previous lines of code.

You specify it when you create the databunch. Here is an example:

# bs is the batch size; lower it (e.g. to 16 or 8) to cut GPU memory use
data = ImageDataBunch.from_folder(path,
                                  valid_pct=0.2,
                                  ds_tfms=get_transforms(),
                                  size=resolution,
                                  bs=batchSize)
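The learner itself doesn't need any change: cnn_learner takes this data object as its first argument, so it picks the batch size up from there. Dropping bs (the default is 64) to 16 or even 4 usually gets past the OOM, at the cost of slower training.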

Thanks! This worked. It was a little hidden because the notebook specifically says we then need to use ImageList (and not ImageDataBunch), but the ImageList pipeline still ends in a databunch() call, to which I fed bs=4 as an argument.


Could you elaborate on that? Did you replace ImageList with ImageDataBunch, or what exactly did you change?

Hey @BelAir,

I was stuck on the same thing.

I imagine he added bs=4 in the data block (see below):

data = (src.transform(tfms, size=256)
        .databunch(bs=4)            # lowering bs here is what avoids the OOM
        .normalize(imagenet_stats))
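For context, here is roughly how that fits into the rest of the lesson3-planet pipeline (a sketch from memory, so treat the exact paths, split, and transform arguments as assumptions):

from fastai.vision import *

path = Config.data_path()/'planet'   # assumption: wherever the Kaggle download landed
tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)

np.random.seed(42)                   # make the random validation split reproducible

# Multi-label source: tags in the CSV are space-delimited
src = (ImageList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg')
       .split_by_rand_pct(0.2)
       .label_from_df(label_delim=' '))

# bs=4 in databunch() is the only change needed to fix the OOM
data = (src.transform(tfms, size=256)
        .databunch(bs=4)
        .normalize(imagenet_stats))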
