I am trying to run the notebook from the second lesson on the sample data. I have a GTX 960M with 1 GB of GPU memory and 8 GB of main RAM.
This code gives me an error:
from vgg16 import Vgg16
vgg = Vgg16()
model = vgg.model
Error allocating 411041792 bytes of device memory (out of memory).
Is it about GPU or main memory?
I couldn’t find the nvidia-smi command on macOS, though I have CUDA installed and working. What is the way to check GPU memory consumption on macOS?
And the last question: will it be possible to do the course exercises on the sample data with this machine? I am OK with running a p2 instance to fit the whole data, but currently I spend a lot of time just figuring out how something works in Python.
Absolutely - we show how to do that in the first couple of lessons. It’s a great approach.
I’m not familiar with Mac GPU stuff, but have you tried https://github.com/phvu/cuda-smi ?
Reduce your batch size considerably, to something like 2. And if you have any external monitors, unplug them, as they use up GPU memory.
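To see why batch size matters so much here, a rough sketch of the arithmetic (the layer shape is VGG16's first conv block; treating that single activation as the dominant cost is a simplification):

```python
# Activation memory scales linearly with batch size.
# VGG16's first conv block outputs 224*224*64 float32 values per image.
floats_per_image = 224 * 224 * 64
bytes_per_float = 4  # float32

for batch in (64, 8, 2):
    mib = batch * floats_per_image * bytes_per_float / 2**20
    print(f"batch={batch:2d}: {mib:7.1f} MiB for the largest conv activation")
```

With the default batch size of 64 that one activation alone is ~784 MiB, which already blows a 1 GB card; at batch size 2 it drops to ~24.5 MiB.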
Here are the results:
Device 0 [PCIe 0000:01:00.0]: GeForce GT 650M (CC 3.0): 180 of 1023 MiB Used
And after the notebook is loaded and Keras (with the Theano backend) is imported:
Device 0 [PCIe 0000:01:00.0]: GeForce GT 650M (CC 3.0): 897 of 1023 MiB Used
The error with batch_size=2:
MemoryError: ('Error allocating 411041792 bytes of device memory (out of memory).'
I just don’t have 400 more MB of GPU memory…
@lexsys Try restarting the kernel (if you haven’t already), which can be helpful when you’re out of GPU memory.
Learned this the hard way.
The batch_size modification will not take effect until you restart the kernel in Jupyter. I’m running the notebooks locally on both Windows and Mac machines and was able to run things fine with a batch size of 8.
Ouch - only 1GB of graphics RAM. Not sure how much you’ll be able to do with that, I’m afraid, since VGG itself needs more than that to run.
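A quick back-of-the-envelope check supports this: the 411041792 bytes that failed to allocate is exactly the float32 weight matrix of VGG16’s first fully connected layer (7×7×512 inputs flattened into 4096 units), before counting any other layer or activation:

```python
# VGG16's fc1 layer: 7*7*512 = 25088 flattened conv features -> 4096 units.
fc1_params = 7 * 7 * 512 * 4096   # weight count for the fc1 matrix
fc1_bytes = fc1_params * 4        # float32 = 4 bytes each

print(fc1_bytes)  # 411041792 -- matches the failed allocation exactly
```

So the single biggest tensor in the model already needs ~392 MiB, and the full set of VGG16 weights is well over 500 MB in float32, which is why a 1 GB card struggles regardless of batch size.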
Try putting cnmem = 0.85 in your .theanorc file.
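For reference, a minimal .theanorc sketch with that setting in place (the cnmem option lives in Theano’s [lib] section; the [global] values shown are common defaults for GPU use, not requirements):

```ini
[global]
device = gpu
floatX = float32

[lib]
cnmem = 0.85
```

cnmem = 0.85 tells Theano’s CNMeM allocator to pre-allocate 85% of GPU memory up front, which reduces fragmentation, though it won’t create memory the card doesn’t have.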