Clearing GPU Memory - PyTorch


(M. Mansour) #1

I am trying to run the first lesson locally on a machine with a GeForce GTX 760, which has 2GB of memory.

After executing this block of code:

arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 2)

the GPU memory usage jumped from 350MB to 700MB. Continuing with the tutorial and executing more blocks of code that contained training operations made memory consumption grow until it reached the 2GB maximum, after which I got a runtime error indicating that there wasn’t enough memory.

I know that in this particular case it can be avoided by skipping the earlier blocks of code that contain a training operation and only executing the one where I ran out of memory, but how else could this be solved? I tried executing del learn, but that doesn’t seem to free any memory.
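
For reference, this is roughly how I was checking usage from inside the notebook (assuming a PyTorch version recent enough to have the torch.cuda memory helpers; nvidia-smi shows the same picture from the terminal):

import torch

# memory currently held by live tensors on the default GPU, in MB
print(torch.cuda.memory_allocated() / 1024**2)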


(Cedric Chee) #2

Try to restart the Jupyter kernel.

Or, we can free this memory without needing to restart the kernel. See the following thread for more info.


(Sam) #3

If you did del some_object

follow it up with torch.cuda.empty_cache()

This will allow the cached memory to be released (you may have read that PyTorch keeps and reuses memory after a del some_object).

This way you can see what memory is truly available.
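
Something like this, roughly (a minimal sketch using the learn object from the original post as the example; the gc.collect() call and the memory_allocated() check are extras I’d add on top of the del + empty_cache() pattern, assuming a PyTorch version that has them):

import gc
import torch

del learn                    # drop the last Python reference to the learner/model
gc.collect()                 # make sure Python actually releases the objects
torch.cuda.empty_cache()     # return the cached blocks to the GPU driver

# what is still held by live tensors, in MB
print(torch.cuda.memory_allocated() / 1024**2)

After this, nvidia-smi should reflect the freed memory as well, since the cache has been handed back to the driver rather than kept for reuse by PyTorch.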