I'm having a lot of problems trying to fine-tune the ResNet50 model. Here is my code:

```
# Keras 1.x API (predict_generator / nb_sample / nb_epoch)
from keras.applications import resnet50
from keras.models import Model, Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
# train_path, valid_path and onehot_encode are defined elsewhere

# Chop off ResNet50's classification head to get 2048-d features
resn_model = resnet50.ResNet50(include_top=True)
resn_model = Model(resn_model.input, resn_model.layers[-2].output)

gen = ImageDataGenerator()
batch_size = 64
target_size = (224, 224)
train_batches = gen.flow_from_directory(train_path, batch_size=batch_size, class_mode='categorical', target_size=target_size, shuffle=False)
valid_batches = gen.flow_from_directory(valid_path, batch_size=batch_size, class_mode='categorical', target_size=target_size, shuffle=False)
train_labels = onehot_encode(train_batches.classes)
valid_labels = onehot_encode(valid_batches.classes)

# Precompute bottleneck features
train_features = resn_model.predict_generator(train_batches, train_batches.nb_sample)
valid_features = resn_model.predict_generator(valid_batches, valid_batches.nb_sample)

# Train a small softmax classifier on the precomputed features
top_model = Sequential()
top_model.add(Dense(2, activation='softmax', input_shape=(2048,)))
top_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
top_model.optimizer.lr = 0.0001
nb_epoch = 10
top_model.fit(train_features, train_labels, batch_size=batch_size, validation_data=(valid_features, valid_labels), nb_epoch=nb_epoch)

# Stack the trained classifier on top of the ResNet50 base
full_model = Model(input=resn_model.input, output=top_model(resn_model.output))
full_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
full_model.evaluate_generator(valid_batches, val_samples=valid_batches.nb_sample)
# Outputs: [0.14526151073591842, 0.94553376906318087]

# Freeze everything except the classifier layer
for layer in full_model.layers:
    layer.trainable = False
full_model.layers[-1].trainable = True
full_model.summary()
# Outputs: Total params: 23,591,810
#          Trainable params: 4,098
#          Non-trainable params: 23,587,712
# (Only the last layer is trainable)

nb_epoch = 1
full_model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=nb_epoch,
                         validation_data=valid_batches, nb_val_samples=valid_batches.nb_sample)
full_model.evaluate_generator(valid_batches, val_samples=valid_batches.nb_sample)
# Outputs: [2.9434102562373843, 0.63180827886710245]
```
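For reference, `onehot_encode` is just a small helper of my own; a minimal NumPy version of what it does looks like this:

```python
import numpy as np

def onehot_encode(classes, num_classes=None):
    """Turn an array of integer class indices into one-hot rows."""
    classes = np.asarray(classes)
    if num_classes is None:
        num_classes = int(classes.max()) + 1
    # Indexing the identity matrix picks out one one-hot row per class index
    return np.eye(num_classes)[classes]

print(onehot_encode([0, 1, 1, 0]).tolist())
# [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0]]
```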

Am I not keeping the weights from `resn_model` and `top_model` when creating `full_model`? What's going on here? Setting the learning rate to 0 before calling `full_model.fit_generator()` does the same thing: my accuracy ends up close to 0.5.

I also tried loading the weights from `resn_model` and `top_model` into `full_model` explicitly:

```
# Copy weights layer-by-layer from the two source models into full_model
resn_layers = resn_model.layers
top_layers = top_model.layers
full_layers = full_model.layers
for l1, l2 in zip(full_model.layers[:len(resn_layers)], resn_layers):
    l1.set_weights(l2.get_weights())
for l1, l2 in zip(full_model.layers[len(resn_layers):], top_layers):
    l1.set_weights(l2.get_weights())
```
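To check whether a copy like this actually sticks, I can compare the weight lists directly with plain NumPy (a sketch; the `atol` tolerance is arbitrary):

```python
import numpy as np

def weights_match(weights_a, weights_b, atol=1e-6):
    """True if two lists of weight arrays have the same shapes and are element-wise close."""
    if len(weights_a) != len(weights_b):
        return False
    return all(a.shape == b.shape and np.allclose(a, b, atol=atol)
               for a, b in zip(weights_a, weights_b))

# e.g. weights_match(resn_model.layers[10].get_weights(),
#                    full_model.layers[10].get_weights())
print(weights_match([np.ones((2, 2))], [np.ones((2, 2))]))   # True
print(weights_match([np.ones((2, 2))], [np.zeros((2, 2))]))  # False
```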

This changes nothing. I am really lost here and have been googling this issue for hours without finding a solution. Can anyone here please explain what I am doing wrong?

EDIT:

When training `top_model` I use a `batch_size` of 4:

```
batch_size = 4
target_size = (224, 224)
gen = ImageDataGenerator()
train_batches = gen.flow_from_directory(train_path, batch_size=batch_size, class_mode='categorical', target_size=target_size, shuffle=True)
valid_batches = gen.flow_from_directory(valid_path, batch_size=batch_size, class_mode='categorical', target_size=target_size, shuffle=False)
```

If I train on only `batch_size` images per epoch I get:

```
nb_epoch = 3
full_model.fit_generator(train_batches, batch_size, nb_epoch=nb_epoch,
                         validation_data=valid_batches, nb_val_samples=valid_batches.nb_sample)
# Output:
# Epoch 1/3
# 4/4 [==============================] - 11s - loss: 1.1729 - acc: 0.5000 - val_loss: 0.1455 - val_acc: 0.9521
# Epoch 2/3
# 4/4 [==============================] - 11s - loss: 8.5129 - acc: 0.2500 - val_loss: 0.1482 - val_acc: 0.9477
# Epoch 3/3
# 4/4 [==============================] - 11s - loss: 4.1496 - acc: 0.5000 - val_loss: 0.1514 - val_acc: 0.9477
```

And if I train on the full training set:

```
nb_epoch = 3
full_model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=nb_epoch,
                         validation_data=valid_batches, nb_val_samples=valid_batches.nb_sample)
# Output:
# Epoch 1/3
# 1836/1836 [==============================] - 96s - loss: 3.0078 - acc: 0.4929 - val_loss: 2.9002 - val_acc: 0.5381
# Epoch 2/3
# 1836/1836 [==============================] - 95s - loss: 3.0078 - acc: 0.4929 - val_loss: 3.0234 - val_acc: 0.5272
# Epoch 3/3
# 1836/1836 [==============================] - 95s - loss: 3.0078 - acc: 0.4929 - val_loss: 3.0246 - val_acc: 0.5272
```
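For context on those numbers: with two balanced classes, 0.49 accuracy is chance level, and a categorical cross-entropy around 3.0 means the model is assigning only about exp(-3) ≈ 5% probability to the true class on average. A quick NumPy illustration with toy predictions (not from my model, just to show the scale):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean cross-entropy over samples; y_true is one-hot, rows of y_pred sum to 1."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))

# Toy two-class batch where the true class always gets ~5% probability
y_true = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
y_pred = np.array([[0.05, 0.95]] * 2 + [[0.95, 0.05]] * 2)
print(round(categorical_crossentropy(y_true, y_pred), 3))  # ~3.0, the scale of the loss above
```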

After training on the full training set, the weights are all wrong. I use a very small learning rate as well: `full_model.optimizer.lr = 0.0000000001`.