Fine-tuning VGG taking very long

It seems like predict_generator was written for the explicit purpose of not eating up all your memory. I haven’t looked at the implementation, but it should do something like the following (see the sketch after this list):

  • load a batch of test cases into GPU memory.
  • make and store predictions for the entire batch (the predictions are stored in a relatively small numpy array).
  • remove all references to the batch, allowing the Python garbage collector to reclaim the memory before the next batch loads.
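Here’s a minimal sketch of what such a loop could look like. This is not Keras’s actual implementation — `predict_in_batches` is a made-up name, and the generator is assumed to yield batches of input images only:

```python
import numpy as np

def predict_in_batches(model, generator, steps):
    """Sketch of a predict_generator-style loop, not the real Keras code."""
    all_preds = []
    for _ in range(steps):
        batch = next(generator)                 # load one batch into memory
        preds = model.predict_on_batch(batch)   # predict on just this batch
        all_preds.append(preds)                 # keep only the small prediction array
        # `batch` is rebound on the next iteration, so the garbage
        # collector can reclaim its memory before the next batch loads
    return np.concatenate(all_preds)
```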

Remember that when you create a generator with Keras’s ImageDataGenerator (the batch_size is set on the flow_from_directory / flow call), you choose the batch size and therefore help control the GPU and system memory used during the call to predict_generator.
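For example, something along these lines (assuming Keras 2’s API; `'data/test'` is a placeholder for your own test directory and `model` is your fine-tuned VGG):

```python
from keras.preprocessing.image import ImageDataGenerator

test_datagen = ImageDataGenerator(rescale=1. / 255)

# batch_size caps how many images sit in memory at once
test_generator = test_datagen.flow_from_directory(
    'data/test',              # placeholder path
    target_size=(224, 224),   # VGG's expected input size
    batch_size=32,            # smaller values use less GPU memory
    class_mode=None,          # predictions only, no labels needed
    shuffle=False)            # keep order so predictions match filenames

preds = model.predict_generator(test_generator, steps=len(test_generator))
```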

predict, by contrast, takes an entire “batch” in the sense that it runs prediction on everything you give it at once (think of batch processing in computer science), so the whole input array has to fit in memory up front.
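That is, with a call like this (`x_test` is a placeholder name), the full array is already loaded before prediction starts:

```python
# The entire x_test array must be in memory before this call;
# predict only splits it into forward-pass batches internally
preds = model.predict(x_test, batch_size=32)
```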

Any chance there’s something wrong with how you installed cuDNN?
