In the MNIST notebook, @jeremy used an ensemble to train the model:
```python
def fit_model():
    model = get_model_bn_do()
    model.fit_generator(batches, batches.N, nb_epoch=1, verbose=0,
                        validation_data=test_batches, nb_val_samples=test_batches.N)
    model.optimizer.lr = 0.1
    model.fit_generator(batches, batches.N, nb_epoch=4, verbose=0,
                        validation_data=test_batches, nb_val_samples=test_batches.N)
    model.optimizer.lr = 0.01
    model.fit_generator(batches, batches.N, nb_epoch=12, verbose=0,
                        validation_data=test_batches, nb_val_samples=test_batches.N)
    model.optimizer.lr = 0.001
    model.fit_generator(batches, batches.N, nb_epoch=18, verbose=0,
                        validation_data=test_batches, nb_val_samples=test_batches.N)
    return model

models = [fit_model() for i in range(6)]
```
My desktop configuration is:
- RAM: 16 GB
- Processor: i7 quad-core
- GPU: 8 GB
The `fit_model()` function will execute on the GPU, but my confusion is with the last line of the code above:
- Will that last line, i.e. the list comprehension, use my desktop RAM (16 GB) to store the trained models during the loop?
- Is it going to lead to memory exhaustion of my desktop RAM (16 GB)?
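For context, here is a rough back-of-envelope sketch of the host RAM the `models` list might hold. It assumes (hypothetically) that each Keras model keeps a host-side float32 copy of its weights, and the ~600k parameter count per model is a made-up figure for illustration, not taken from the notebook:

```python
# Rough estimate: a list of N trained models costs about
# N * n_params * 4 bytes of host RAM for float32 weights
# (ignoring optimizer state and Python object overhead).
def ensemble_ram_bytes(n_models, n_params, bytes_per_param=4):
    """Approximate host RAM held by a list of n_models models."""
    return n_models * n_params * bytes_per_param

# e.g. 6 models of ~600,000 parameters each (hypothetical size):
print(ensemble_ram_bytes(6, 600000) / 1e6, "MB")  # ~14.4 MB
```

Under those assumptions the weights themselves are tiny relative to 16 GB, so the list itself would not be the bottleneck.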