I am trying to run the first lesson locally on a machine with a GeForce GTX 760, which has 2 GB of GPU memory.
After executing this block of code:
# PATH, sz and the fastai imports are set up in earlier cells of the lesson notebook
arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 2)
The GPU memory usage jumped from 350 MB to 700 MB. Continuing with the tutorial and executing further blocks that contained a training operation made the consumption grow until it reached the 2 GB maximum, at which point I got a runtime error indicating that there isn't enough memory.
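For reference, this is roughly how I am checking the numbers from inside the notebook; it is a minimal sketch that assumes torch.cuda.memory_allocated() is available in the installed PyTorch version (otherwise I just watch nvidia-smi from a terminal):

import torch

def report_gpu_memory(tag=""):
    # Megabytes currently allocated by tensors on the default GPU
    allocated_mb = torch.cuda.memory_allocated() / 1024 ** 2
    print(f"{tag}: {allocated_mb:.0f} MB allocated by tensors")

report_gpu_memory("before fit")
learn.fit(0.01, 2)
report_gpu_memory("after fit")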
I know that in this particular case I could avoid the problem by skipping the previous blocks that contain a training operation and executing only the one where I ran out of memory, but how else could this be solved? I tried executing del learn, but that doesn't seem to free any memory.
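To be concrete, this is the kind of cleanup I was hoping would release the memory; it is only a sketch, assuming fastai runs on top of PyTorch and that torch.cuda.empty_cache() exists in the installed PyTorch version:

import gc
import torch

del learn                  # drop the reference to the learner (and the model it holds)
gc.collect()               # collect the now-unreachable Python objects
torch.cuda.empty_cache()   # ask PyTorch to release its cached GPU memory back to the driver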