On the AWS P2 instance, after running the code from Lesson 2’s notebook to fit the linear model on the trn_features training data and trn_labels target data:

lm.fit(trn_features, trn_labels, nb_epoch=3, batch_size=batch_size)

I’m getting this error:
ERROR (theano.gof.cmodule): [Errno 12] Cannot allocate memory
top shows that the P2 instance is already using 55.8 GB of its 60 GB of memory.
Is anyone else experiencing this problem?
Unfortunately, yes. It’s scary how easy it is to use up 60 GB. I usually try the following:

1. Restart the Jupyter notebook server.
2. Stop individual notebook kernels.
3. Manually garbage collect.

I usually just save all my work and do 1, as it’s the most effective.
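For option 3, a minimal sketch of freeing memory by hand in a notebook cell. The variable name here is a stand-in; in the Lesson 2 notebook the big consumers would be arrays like trn_features:

```python
import gc

# Stand-in for a large array holding training data.
trn_features = bytearray(10**6)

# Drop the reference so the memory becomes collectable,
# then force a collection pass.
del trn_features
freed = gc.collect()
print("unreachable objects collected:", freed)
```

Note this only helps if nothing else (another cell, a model object) still holds a reference to the array.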
You may want to avoid get_data() and load the data in batches instead, so the whole dataset isn’t held in memory at once.
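As a sketch of what batch-wise feeding looks like, here is a generic Python generator (not the course’s get_batches helper; the array shapes and batch size are made up for illustration). In the course notebooks the generator would read images from disk per batch, so the full dataset never has to live in RAM:

```python
import numpy as np

def batch_iter(features, labels, batch_size=64):
    """Yield (features, labels) chunks one batch at a time.
    In practice each chunk would be loaded from disk here,
    rather than sliced from arrays already in memory."""
    for start in range(0, len(features), batch_size):
        yield (features[start:start + batch_size],
               labels[start:start + batch_size])

# Usage: feed batches to the model instead of the full arrays,
# e.g. via Keras 1's fit_generator() as used in the course.
X = np.zeros((256, 10), dtype=np.float32)
y = np.zeros((256, 2), dtype=np.float32)
n_batches = sum(1 for _ in batch_iter(X, y, batch_size=64))
print(n_batches)  # → 4
```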
If we are loading training data for the model, how are we supposed to use batches? All of the examples show get_data(), unless I’m missing something. Are you saying we should use batches whenever possible, but that in some cases we must use get_data() and load everything into memory?