How to create a limit_mem() function for Theano (rather than TF)?

Does anyone know how I can create a limit_mem() function that works for Theano? This is the one I use with the TensorFlow backend:

import keras.backend as K  # Keras running on the TensorFlow backend

def limit_mem():
    # Let TensorFlow grow its GPU allocation as needed instead of grabbing it all up front.
    cfg = K.tf.ConfigProto()
    cfg.gpu_options.allow_growth = True
    K.set_session(K.tf.Session(config=cfg))

Thank you!

I haven’t used Theano much, but was under the impression that it doesn’t automatically grab all of the GPU memory the way TensorFlow does (so you probably don’t need that function at all).

I think you're looking for the cnmem config flag: http://deeplearning.net/software/theano/library/config.html (about halfway down the page; see the sketch after the quoted excerpt below):

Controls the use of CNMeM (a faster CUDA memory allocator). In Theano dev version until 0.8 is released.
The CNMeM library is included in Theano and does not need to be separately installed.
The value represents the start size (either in MB or the fraction of total GPU memory) of the memory pool. If more memory is needed, Theano will try to obtain more, but this can cause memory fragmentation.
0: not enabled.
0 < N <= 1: use this fraction of the total GPU memory (clipped to .95 for driver memory).
> 1: use this number in megabytes (MB) of memory.
Default: 0 (but should change later)
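
To actually set that, here is a rough Python sketch. The lib.cnmem flag name comes from the config page above; the helper name limit_mem_theano is just my own, and the key point is that the flag has to be in THEANO_FLAGS before theano is imported, because Theano only reads it at import time.

import os

def limit_mem_theano(fraction=0.45):
    # Ask Theano's CNMeM allocator to start its pool at `fraction` of total GPU memory.
    # Must run BEFORE `import theano`; THEANO_FLAGS is read only at import time.
    cnmem_flag = "lib.cnmem=%s" % fraction
    existing = os.environ.get("THEANO_FLAGS", "")
    os.environ["THEANO_FLAGS"] = cnmem_flag if not existing else existing + "," + cnmem_flag

limit_mem_theano(0.45)
import theano  # imported after setting the flag, on purpose

Alternatively (if I'm reading the docs right) you can put cnmem = 0.45 under a [lib] section in ~/.theanorc so it applies to every run without touching your scripts.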

I'll give this a shot. Thank you!