I have been struggling for two days with my model's performance after I adjusted the dropout. I used the function from class 3 to perform the adjustment:
def get_fc_model(prev_p, new_p, conv_layers):
    # Fully connected head with the new dropout probability new_p
    model = Sequential([
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        Dropout(new_p),
        Dense(4096, activation='relu'),
        Dropout(new_p),
        Dense(2, activation='softmax'),
    ])
    # Rescale the dense-layer weights to compensate for the changed dropout
    for l in model.layers:
        if type(l) == Dense:
            l.set_weights(adj_wgts(l, prev_p, new_p))
    return model

def adj_wgts(layer, prev_p, new_p):
    scal = (1 - prev_p) / (1 - new_p)
    return [o * scal for o in layer.get_weights()]
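For anyone reading along, here is a standalone numpy-only sketch of what that rescaling step does (no Keras needed; `adj_wgts_np` is a hypothetical stand-in for the function above). The assumption is that with dropout probability p each unit survives with probability (1 - p), so going from `prev_p` to `new_p` changes the expected input to the next layer by (1 - new_p)/(1 - prev_p), and scaling the weights by the inverse factor (1 - prev_p)/(1 - new_p) compensates:

```python
import numpy as np

def adj_wgts_np(weights, prev_p, new_p):
    # Scale every weight array (matrix and bias alike, as in the code above)
    # by (1 - prev_p)/(1 - new_p) to offset the dropout change.
    scal = (1 - prev_p) / (1 - new_p)
    return [w * scal for w in weights]

w = [np.ones((2, 2)), np.zeros(2)]   # toy weight matrix and bias
w_adj = adj_wgts_np(w, 0.5, 0.65)
# scal = 0.5 / 0.35, roughly 1.43, so every weight grows by about 43%
```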
After that I proceeded as follows:
model = vgg.model
layers = model.layers
# Index of the last convolutional layer
last_conv_idx = [index for index, layer in enumerate(layers)
                 if type(layer) is Convolution2D][-1]
conv_layers = layers[:last_conv_idx + 1]
conv_model = Sequential(conv_layers)
fc_s3 = get_fc_model(0.5, 0.65, conv_layers)
conv_model.add(fc_s3)
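The "find the last conv layer" pattern can be illustrated with plain Python; the classes below are hypothetical stand-ins for the real Keras layers, so this runs without Keras:

```python
# Stand-in classes for illustration only
class Convolution2D: pass
class MaxPooling2D: pass
class Dense: pass

layers = [Convolution2D(), MaxPooling2D(), Convolution2D(), Dense()]
# Index of the last Convolution2D in the list
last_conv_idx = [i for i, l in enumerate(layers)
                 if type(l) is Convolution2D][-1]
conv_layers = layers[:last_conv_idx + 1]
# last_conv_idx is 2; conv_layers keeps everything up to and including
# the last Convolution2D
```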
Since my own data set might need some adjustment of the last conv layer, I made that one trainable as well:
for layer in conv_model.layers[:-2]:
    layer.trainable = False
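The intent of that loop can be checked with a minimal sketch (the `Layer` class here is a hypothetical stand-in, not a Keras class): everything except the last two entries, i.e. the final conv layer and the newly added fully connected head, gets frozen.

```python
class Layer:
    # Hypothetical stand-in: real Keras layers also expose a trainable flag
    def __init__(self):
        self.trainable = True

model_layers = [Layer() for _ in range(10)]
# Freeze all but the last two layers
for layer in model_layers[:-2]:
    layer.trainable = False
# only the last two layers remain trainable
```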
I then compiled my model with a low learning rate and started training.
For better comparison I used the cats-and-dogs data set as well. Training starts with a loss of about 7, and after around 80 epochs the loss is still only around 1.
Is there something I'm doing wrong?