In statefarm, why aren't training batches created with shuffle=False?

I fell into this trap as well.

Let me state this explicitly in case someone is searching the forums: if you're working through the statefarm.ipynb notebook and getting accuracies that approximate chance with a model that uses the pre-trained VGG layers (up through the last Convolution2D layer) as inputs, this is very likely your problem. The precomputed conv features and the labels are fetched in different orders, so each feature vector ends up paired with the wrong label.
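To see why shuffling breaks the pairing, here is a minimal NumPy sketch (a toy stand-in, not the actual notebook code): the "iterator" yields samples in a fixed scrambled order to mimic `shuffle=True`, while the labels are read separately in directory order, as they are when you save features to disk and train a second model on them.

```python
import numpy as np

# Toy stand-in for a DirectoryIterator over 8 images stored in directory
# (i.e. class) order: first 4 belong to class 0, last 4 to class 1.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
images = labels.astype(float).reshape(-1, 1)  # each "image" trivially encodes its class

# With shuffle=True the iterator yields samples in a scrambled order
# (a fixed permutation here, to keep the sketch deterministic).
shuffled_order = np.array([5, 2, 7, 0, 4, 1, 6, 3])

feats_ordered = images                   # features precomputed with shuffle=False
feats_shuffled = images[shuffled_order]  # features precomputed with shuffle=True

# Later, labels are fetched separately in directory order and paired
# row-by-row with the saved features.
print(np.array_equal(feats_ordered.ravel(), labels))   # features line up with labels
print(np.array_equal(feats_shuffled.ravel(), labels))  # pairing is scrambled
```

With the ordered features every row matches its label; with the shuffled features half the rows carry the wrong label, so the downstream model trains on noise and hovers at chance accuracy.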

This issue is also discussed (and answered) here:

Very happy I found this thread. I won't make this mistake with a DirectoryIterator when tying two models together again.