Precompute vs. Freezing

How can we have precompute=True and use data augmentation at the same time?

In the video recording of the lecture, at 49:41, Jeremy mentions: “Since we’ve already pre-computed the activations for our input images, that means that data augmentation doesn’t work.”

I think one fundamental difference between a pre-trained network and pre-computed outputs is that:

  • In a pre-trained network, the layer weights have been pre-calculated, but this has nothing to do with your input data (the training/validation/test images of dogs and cats).
  • Pre-computed activations are produced by passing your input data (the dog and cat images) through that network once and caching the results (see the sketch after this list).
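Here is a minimal PyTorch sketch of the distinction, assuming a torchvision ResNet-34 as the pre-trained backbone and a standard DataLoader over your images (the names `precompute` and `loader` are just illustrative, not fastai's internals):

```python
import torch
from torchvision import models

# Pre-trained network: only the weights are loaded here; your own
# dataset is not involved at all yet.
backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # keep the 512-d convolutional features
backbone.eval()

# Pre-computed activations: *your* images pass through those weights once,
# and the outputs are cached for reuse.
@torch.no_grad()
def precompute(loader):
    feats, labels = [], []
    for x, y in loader:             # x: batches of dog/cat images
        feats.append(backbone(x))   # the forward pass happens here, once
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)
```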

It seems like pre-computing is a second-level caching mechanism that avoids repeated feed-forward passes over the input data, since those passes would produce the same results each time. The danger is that if you make a tweak that changes the inputs (like data augmentation) and forget to re-run the pre-compute phase, your changes won’t be reflected, as the second sketch below shows.
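Continuing the sketch above (and assuming `train_loader` applies random transforms), this is roughly how that hazard plays out; the linear head here is just a stand-in for the small classifier trained on top of the cached features:

```python
head = torch.nn.Linear(512, 2)      # dog vs. cat classifier head
opt = torch.optim.SGD(head.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

def head_train_step(feats, labels):
    opt.zero_grad()
    loss_fn(head(feats), labels).backward()
    opt.step()

# precompute=True behaviour: the random transforms are sampled once, when
# the cache is built, so every epoch reuses identical tensors and the
# augmentation never generates new inputs for the head.
feats, labels = precompute(train_loader)
for epoch in range(3):
    head_train_step(feats, labels)

# For augmentation to matter, the images must be re-transformed and
# re-forwarded every epoch instead of read from the cache. If I understand
# correctly, this is roughly what setting learn.precompute = False does in
# the course's version of fastai before calling fit again.
for epoch in range(3):
    for x, y in train_loader:       # fresh random transforms each epoch
        head_train_step(backbone(x).detach(), y)  # backbone stays frozen
```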
