Tip: Clear TensorFlow GPU memory

Inspired by a question from @ostegm, I’ve added an extra line to limit_mem() as follows:

from keras import backend as K

def limit_mem():
    # Close the current TensorFlow session, then install a new one that
    # allocates GPU memory on demand rather than grabbing it all up front.
    K.get_session().close()
    cfg = K.tf.ConfigProto()
    cfg.gpu_options.allow_growth = True
    K.set_session(K.tf.Session(config=cfg))

As a result, you can now call this function at any time to reset your GPU memory without restarting your kernel. Hope you find this helpful! 🙂
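For example, here is a rough usage sketch (not from the original post — build_model() and the training arrays are placeholders) showing how you might reset memory between experiments:

model = build_model()          # placeholder: any function that builds a Keras model
model.fit(X_train, y_train)    # placeholder training data

del model                      # drop the Python reference to the finished model
limit_mem()                    # close the old session and start a fresh one

model = build_model()          # the next experiment starts with freed GPU memory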


That was so timely. Thanks @jeremy @ostegm

I have a question regarding the limit_mem() function. It causes

CUDA_ERROR_OUT_OF_MEMORY

errors on my desktop (configuration: Titan X 12 GB card / Python 2.7 / Keras 1.2.2 / TensorFlow 0.11.0). As a result, I had to stop using it.

Has anyone else observed this behavior?

I haven’t observed this behavior, but I have Python 3.6 and TensorFlow 1.0.

Acknowledged, thanks. It might be due to the older TensorFlow version.

If I run the new limit_mem function from a freshly restarted kernel, TF takes over all my GPU memory. It’s as if it ignores the allow_growth option.

According to this document, https://www.tensorflow.org/tutorials/using_gpu#allowing_gpu_memory_growth:

Note that we do not release memory, since that can lead to even worse memory fragmentation.

What I think happens here: the call to K.get_session() creates a session with the default config, which immediately claims all the GPU memory. session.close() doesn’t release that memory, so consumption stays the same as it would without calling limit_mem().

Just putting this block of code in the beginning of the notebook works for me:

from keras import backend as K

# Create the session with allow_growth before anything else has a chance
# to create a default (full-allocation) session.
cfg = K.tf.ConfigProto()
cfg.gpu_options.allow_growth = True
K.set_session(K.tf.Session(config=cfg))
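If you also want a limit_mem() you can call later without hitting the problem above, here is a rough sketch of my own (not from the original posts). It peeks at the backend’s internal _SESSION variable, which is an implementation detail that may change between Keras versions, so treat it as a workaround rather than a supported API:

from keras import backend as K

def limit_mem():
    # Only close a session if Keras has actually created one; calling
    # K.get_session() from a fresh kernel would itself create a default
    # session that grabs all GPU memory.
    # _SESSION is an internal detail of the TensorFlow backend.
    sess = getattr(K.tensorflow_backend, '_SESSION', None)
    if sess is not None:
        sess.close()
    cfg = K.tf.ConfigProto()
    cfg.gpu_options.allow_growth = True
    K.set_session(K.tf.Session(config=cfg))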