Training previous convolution layers of Inception V3 increases loss

I am trying to do multi-label image classification for 1000 classes using Inception V3 in Keras. In my first version of the model (M1), I retrained only the penultimate layer and got a validation loss of 0.0152. But when I train more layers (the penultimate layer plus some convolution layers), call it model M2, the validation loss settles at 0.0180. I expected the validation loss to decrease when training more layers.

To speed up training, I copied the penultimate-layer weights from M1 into M2. I have tried both SGD (with lr=0.0001) and "adam" as the optimizer. I am stuck here. Can someone help me or point out the flaw in my code?

from keras import applications
from keras.layers import AveragePooling2D, Flatten, Dense, Dropout
from keras.models import Model, load_model

base_model = applications.InceptionV3(weights="imagenet", include_top=False)

#Adding FC layers to base Inception model
x = base_model.output
x = AveragePooling2D((8,8), strides=(8,8))(x)
x = Flatten()(x)
x = Dense(1024, activation="relu")(x)
x = Dropout(0.5)(x)
out = Dense(1000, activation="sigmoid")(x)
model = Model(inputs=base_model.input, outputs=out)

#loading the weights of M1 (the penultimate-layer-only model) to copy into the new model M2 before training
model_temp = load_model("model_ppath_M1")
mwt = model_temp.get_weights()

#defining which layers to train: freeze the first 279 layers, train the rest
for layer in model.layers[:279]:
    layer.trainable = False
for layer in model.layers[279:]:
    layer.trainable = True

#copying pre-trained M1 weights into this model to speed up training
model.layers[313].set_weights([mwt[0], mwt[1]])
model.layers[315].set_weights([mwt[2], mwt[3]])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit_generator(...)  #appropriate data generator function passed here
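For context, the generator passed to `fit_generator` is assumed to look roughly like the minimal sketch below (the `multilabel_batches` name and the in-memory `images`/`labels` arrays are my own illustration, not from the question): it yields `(x, y)` batches indefinitely, with each row of `y` a multi-hot vector over the 1000 labels.

```python
import numpy as np

def multilabel_batches(images, labels, batch_size=32):
    """Yield (x, y) batches forever, as Keras' fit_generator expects.

    images: array of shape (n, H, W, 3)
    labels: multi-hot array of shape (n, num_classes)
    """
    n = len(images)
    while True:
        # reshuffle at the start of every epoch
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            sel = idx[start:start + batch_size]
            yield images[sel], labels[sel]
```

In practice one would usually use Keras' `ImageDataGenerator` (with on-the-fly augmentation) instead, but any infinite generator with this batch shape works with `fit_generator`.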