I run my own server with Ubuntu 16.04 + Theano 0.9 + Keras 2.0 and Python 3.6.
Hardware is an i5 4690K + 16GB RAM + 50GB swap + GTX 1080 Ti with 11GB VRAM.
When running the State Farm notebook on the full data (i.e. not the sample), I get stuck in the “Pre-computed Data Augmentation + Dropout” section,
when trying to “create a dataset of convolutional features 5x bigger than the training set”.
I use the following code:
%time da_conv_feat = conv_model.predict_generator(da_batches, (da_batches.samples*5), workers=3)
I get a “Kernel died” message every time, after maybe 15-20 minutes of computing.
Watching System Monitor, even when starting the notebook from scratch just to run that one line, memory usage climbs slowly but surely from 3GB RAM and 1GB swap to the full 16GB RAM + 50GB swap, and then the notebook’s kernel dies.
The CPU works at 95% due to workers=3, and the GTX 1080 Ti varies between 15% and 100% load, with VRAM never exceeding 45%.
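For what it's worth, a back-of-the-envelope estimate of what the full feature array would occupy in memory. The numbers below are my assumptions, not measured values: VGG16's conv output per image is (512, 14, 14) float32, and the full State Farm training set is roughly 20k images:

```python
# Rough memory estimate for the 5x-augmented conv feature array.
# Assumptions (mine, not from the notebook run): VGG16 conv output
# per image is (512, 14, 14) float32; ~20000 training images.
feat_bytes = 512 * 14 * 14 * 4               # ~392 KB per image
n_samples = 20000
total_gb = n_samples * 5 * feat_bytes / 1024**3
print(round(total_gb, 1))                    # roughly 37 GB
```

So even with a correct step count, the result would not fit in 16GB RAM, which is presumably why the notebook saves the features to disk with bcolz rather than holding them all in memory.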
Also the task seems outrageously massive: in the earlier “Imagenet Conv features” part, the steps value is (batches.samples / batch_size) with batch_size = 64, and even (batches.samples * 1) alone already runs out of memory.
And here we are passing (batches.samples * 5) as steps; at 64 images per batch, that is a 5 * 64 = 320x multiplier on the training set.
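If I read the Keras 2 API correctly, predict_generator's second argument is steps, i.e. the number of *batches* to draw from the generator, not the number of samples. So the step count should be divided by the batch size. A minimal sketch of what I think the call should compute (the sample count below is a hypothetical placeholder, not the real dataset size):

```python
import math

# Hypothetical numbers for illustration only; batch_size matches the
# earlier notebook cells.
n_samples = 20000
batch_size = 64
augmentation_factor = 5

# In Keras 2, predict_generator(generator, steps, ...) consumes `steps`
# *batches*, so the step count is samples / batch_size, not samples.
steps = math.ceil(n_samples * augmentation_factor / batch_size)
print(steps)  # 1563 batches, instead of the 100000 that samples*5 would request
```

With that, the call would become something like `conv_model.predict_generator(da_batches, math.ceil(da_batches.samples * 5 / batch_size), workers=3)`, though I have not been able to verify it end to end on the full dataset.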
Did anyone manage to run that code with the full dataset on their own PC/server?
The original full code is here: https://github.com/fastai/courses/blob/master/deeplearning1/nbs/statefarm.ipynb