Tip: Clear tensorflow GPU memory

(Jeremy Howard) #1

Inspired by a question from @ostegm, I’ve added an extra line to limit_mem() as follows

def limit_mem():
    # The extra line: close the current session so a fresh one can be created
    K.get_session().close()
    cfg = K.tf.ConfigProto()
    cfg.gpu_options.allow_growth = True
    K.set_session(K.tf.Session(config=cfg))

As a result, you can now call this function at any time to reset your GPU memory, without restarting your kernel. Hope you find this helpful! 🙂
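For example, between two memory-hungry experiments in the same notebook, the call might look like this (a sketch only, assuming a TF1-era Keras backend and limit_mem() as defined above; build_model and the training data are hypothetical placeholders):

```python
from keras import backend as K

model = build_model()           # hypothetical helper: first experiment
model.fit(x_train, y_train)     # training grabs GPU memory as needed

limit_mem()                     # swap in a fresh session; the new one
                                # uses allow_growth, so the next model
                                # starts from a small allocation again

model = build_model()           # second experiment, same kernel
model.fit(x_train, y_train)
```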


That was so timely. Thanks @jeremy @ostegm

(Ljubomir Buturovic) #3

I have a question regarding the limit_mem() function. It causes errors on my desktop (configuration: Titan X 12 GB card / Python 2.7 / Keras 1.2.2 / TensorFlow 0.11.0), so I had to stop using it.

Has anyone else observed this behavior?

(Matthew Kleinsmith) #4

I haven’t observed this behavior, but I have Python 3.6 and TensorFlow 1.0.

(Ljubomir Buturovic) #5

Acknowledged, thanks. It might be due to the older TensorFlow version.

(Matthew Kleinsmith) #6

If I run the new limit_mem function from a freshly restarted kernel, TF takes over all my GPU memory. It’s as if it ignores the allow_growth option.

(Valentine Bichkovsky) #7

According to this document (https://www.tensorflow.org/tutorials/using_gpu#allowing_gpu_memory_growth):

Note that we do not release memory, since that can lead to even worse memory fragmentation.

What I think happens here: calling K.get_session() in the first line creates a session with the default config, which grabs all the GPU memory. session.close() doesn’t release that memory, so consumption stays the same as it would without calling limit_mem() at all.

Just putting this block of code in the beginning of the notebook works for me:

from keras import backend as K
cfg = K.tf.ConfigProto()
cfg.gpu_options.allow_growth = True
K.set_session(K.tf.Session(config=cfg))
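The same TensorFlow page also documents a second option alongside allow_growth: capping the process at a fixed fraction of GPU memory. A sketch of that variant (TF1-era API; the 0.4 fraction is an arbitrary example, not a recommendation):

```python
from keras import backend as K

cfg = K.tf.ConfigProto()
# Instead of growing on demand, reserve only a fixed slice of the GPU
cfg.gpu_options.per_process_gpu_memory_fraction = 0.4
K.set_session(K.tf.Session(config=cfg))
```

Either way, the config has to be applied before the first session is created, or the default allocate-everything behavior wins.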