@Buzz
I’m using .n and get the same accuracy.
But I had the same issue! I only save the weights of the epoch with the highest val_acc (see the checkpoint sketch below), load them afterwards, and use this model to test the accuracy with:
score = model.evaluate(x_valid, y_valid, batch_size=batch_size)
print("%s: %.2f%%" % (model.metrics_names[1], score[1] * 100))
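For reference, the save/reload part can be done with the standard Keras ModelCheckpoint callback. A minimal sketch, assuming a compiled model (the file name here is just an example):

from keras.callbacks import ModelCheckpoint

# Only write weights when val_acc improves on the best value seen so far.
checkpoint = ModelCheckpoint('best_weights.h5',
                             monitor='val_acc',
                             save_best_only=True,
                             save_weights_only=True,
                             mode='max')

# Pass callbacks=[checkpoint] to fit_generator, then afterwards:
model.load_weights('best_weights.h5')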
With that evaluate call I got different results than during training. However, the issue was that I had loaded x_valid and y_valid with my own data loader. I couldn’t see any difference in the loaded data, but once I loaded x_valid and y_valid in exactly the same way as my train and valid batches, the results matched perfectly. Before that I was messing around with the generator’s n and samples attributes as well.
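One way to check whether a custom loader really produces the same tensors as Keras is to load the directory once with a plain generator and diff against your own arrays. A sketch, where x_custom stands in for whatever your loader returns (note that flow_from_directory sorts files alphabetically per class, so the sample order must match too):

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

check_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    path + 'valid',
    target_size=(299, 299),
    batch_size=1,
    shuffle=False)
x_ref = np.concatenate([check_gen.next()[0] for i in range(check_gen.n)])

# Differences in dtype, scaling, resize interpolation or channel order
# show up as a large maximum deviation here.
print(np.abs(x_ref - x_custom).max())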
Here is how I load my images:
from keras.preprocessing.image import ImageDataGenerator
import numpy as np

train_datagen = ImageDataGenerator(
    rescale=1./255,
    width_shift_range=0.08,
    height_shift_range=0.05,
    horizontal_flip=True,
    zoom_range=0.1,
    fill_mode='constant')

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    path + 'train',
    target_size=(299, 299),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    path + 'valid',
    target_size=(299, 299),
    batch_size=64,
    class_mode='categorical')

steps_per_epoch = int(np.ceil(train_generator.n / batch_size))
# The validation generator uses its own batch size (64), so base the step
# count on that rather than the training batch_size.
validation_steps = int(np.ceil(validation_generator.n / validation_generator.batch_size))
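Those step counts (and the checkpoint callback from above) then go into training roughly like this; epochs is just a placeholder:

model.fit_generator(
    train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_steps,
    callbacks=[checkpoint])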
and then
# Load the validation data as arrays.
gen_val = ImageDataGenerator(rescale=1./255)
gen = gen_val.flow_from_directory(
    path + 'valid',
    target_size=(299, 299),
    batch_size=1,
    shuffle=False)

# Draw each sample exactly once and take images and labels from the same
# batches, so x_valid and y_valid stay aligned.
batches = [gen.next() for i in range(gen.n)]
x_valid = np.concatenate([b[0] for b in batches])
y_valid = np.concatenate([b[1] for b in batches])
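If the arrays really match what the generator feeds the model, both evaluation paths should now agree. A quick sanity check, assuming accuracy is the model’s second metric:

score_arrays = model.evaluate(x_valid, y_valid, batch_size=batch_size)
score_gen = model.evaluate_generator(validation_generator, steps=validation_steps)
print(score_arrays[1], score_gen[1])  # should be (nearly) identical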
I think that may be your problem. Otherwise I’m interested in the reason as well.