My understanding was that the purpose of fitting was to train the model.
i.e. fit(self, X, y, batch_size … )
Where X is your input training data
and y is the corresponding labels.
In lesson 2, I can’t understand how batches is your input data X
and val_batches are your labels, y.
Can someone help me understand this, please?
The VGG class is actually a wrapper around the Keras model.
You are looking at the signature of model.fit.
The VGG.fit method actually uses model.fit_generator, which expects a generator instead, with no requirement to pass X and y separately.
That’s because the generator itself knows the labels - the notebook shows this when calling next on the batches, which returns a tuple of (img, label).
```python
fit_generator(self, generator, steps_per_epoch, epochs=1, verbose=1,
              callbacks=None, validation_data=None, validation_steps=None,
              class_weight=None, max_queue_size=10, workers=1,
              use_multiprocessing=False, initial_epoch=0)
```
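To see why no separate X and y are needed, here is a minimal sketch of what such a generator does - it lazily yields (inputs, labels) tuples, the same pattern as calling next(batches) in the notebook. The data and the batch_generator function below are toy stand-ins I made up for illustration, not the actual Keras implementation:

```python
def batch_generator(images, labels, batch_size=2):
    """Yield (batch_of_images, batch_of_labels) tuples forever,
    looping back to the start, the way Keras generators cover epochs."""
    n = len(images)
    i = 0
    while True:
        yield images[i:i + batch_size], labels[i:i + batch_size]
        i = (i + batch_size) % n

# Toy stand-ins for image arrays and their labels
images = ["img0", "img1", "img2", "img3"]
labels = [0, 1, 0, 1]

gen = batch_generator(images, labels, batch_size=2)
imgs, labs = next(gen)   # same pattern as next(batches) in the notebook
```

Because the generator pairs each batch with its labels as it goes, fit_generator can train without ever holding the full X and y in memory.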
Just to add - if you were to use model.fit, supplying X and y directly, that would not be efficient for large datasets, since everything must fit in memory at once.
Python generators provide an efficient way of iterating over large data “lazily”.
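A tiny example of that laziness (the squares function is just a made-up illustration):

```python
def squares(n):
    """Yield square numbers one at a time instead of building a full list."""
    for i in range(n):
        yield i * i

g = squares(10**9)   # returns instantly; no values are computed yet
first_three = [next(g) for _ in range(3)]   # computes only 0, 1, 4
```

A list of a billion squares would exhaust memory; the generator produces values only as they are requested.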
I’ve only recently studied python generators and found this useful:
Thanks for taking the time to respond, Bilal. Appreciate the link to the python generators too.