Pop(layer) leaves the weights. Can they be deleted?

Early in the course we used pop to remove the last layer of 1,000 neurons and replace it with a 2-neuron layer for dogs and cats. However, this leaves the weights from the old version of the layer.

For example, the code below shows the weights for the last layer as (1234, 2) even though the 1234-unit layer was popped. This makes it unpredictable whether the model will actually train properly. Is there a way of popping the weights along with the layer? [Note: I am using Keras 2, though I suspect the behaviour is the same.]

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(4096, activation='relu', input_shape=(32, 32)))
model.add(Dense(1234, activation='softmax'))
model.layers.pop()  # only removes the layer from the model.layers list
model.add(Dense(2, activation='softmax'))
model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
for layer in model.layers:
    print(layer.get_weights()[0].shape)  # last layer still reports (1234, 2)

I just spotted that I did model.layers.pop(). Changing it to model.pop() solves the problem.
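For reference, here is a minimal sketch of the corrected version (this assumes the Keras 2 Sequential API, where model.pop() removes the last layer and rebuilds the model's outputs, rather than just editing the layer list):

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(4096, activation='relu', input_shape=(32, 32)))
model.add(Dense(1234, activation='softmax'))
model.pop()  # drops the Dense(1234) layer and its weights from the model
model.add(Dense(2, activation='softmax'))
model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])

for layer in model.layers:
    print(layer.get_weights()[0].shape)  # the new last layer now shows (4096, 2)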