I am trying to build a model for sequence tag classification. X_train has shape (2560, 69) and Y_train has shape (2560, 69, 3).
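For reference, here is a minimal sketch of the data shapes with random placeholder values (the real arrays come from my preprocessing):

import numpy as np
from keras.utils import to_categorical

max_len = 69      # tokens per padded sentence
n_classes = 3     # tag classes per token

# placeholder data, only to illustrate the shapes
X_train = np.random.randint(1, 1000, size=(2560, max_len))                    # (2560, 69)
Y_train = to_categorical(np.random.randint(n_classes, size=(2560, max_len)),
                         num_classes=n_classes)                               # (2560, 69, 3)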
This is what the architecture looks like:
from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, LSTM, Dense

# vocab_size, no_of_pos_tags, embedding_matrix and max_len (= 69) come from my preprocessing
m = Sequential([
    Embedding(vocab_size + 1, 300 + no_of_pos_tags, weights=[embedding_matrix],
              input_length=max_len, trainable=False),
    Bidirectional(LSTM(150, activation='relu')),
    Dense(150, activation='relu'),
    Dense(max_len, activation='sigmoid')
])
m.summary() prints:
Layer (type)                 Output Shape              Param #
=================================================================
embedding_5 (Embedding)      (None, 69, 334)           1699058
_________________________________________________________________
bidirectional_5 (Bidirection (None, 300)               582000
_________________________________________________________________
dense_9 (Dense)              (None, 150)               45150
_________________________________________________________________
dense_10 (Dense)             (None, 69)                10419
=================================================================
Total params: 2,336,627
Trainable params: 637,569
Non-trainable params: 1,699,058
The loss is categorical_crossentropy.
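I compile and fit roughly like this (the optimizer, batch size, and epochs here are just placeholders):

m.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
m.fit(X_train, Y_train, batch_size=32, epochs=10)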
When I try to fit my data, this is the error thrown:
Error when checking target: expected dense_10 to have 2 dimensions, but got array with shape (2560, 69, 3)
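The shapes match the message: the last Dense layer produces a 2-D output, while my targets are 3-D:

print(m.output_shape)   # (None, 69)     -> one value per token position
print(Y_train.shape)    # (2560, 69, 3)  -> a 3-class one-hot vector per token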
I tried adding a Flatten layer before the first Dense layer, but that results in: "Input 0 is incompatible with layer flatten_3: expected min_ndim=3, found ndim=2".
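The Flatten attempt looked roughly like this (Flatten inserted right before the first Dense layer, everything else unchanged):

from keras.layers import Flatten

m = Sequential([
    Embedding(vocab_size + 1, 300 + no_of_pos_tags, weights=[embedding_matrix],
              input_length=max_len, trainable=False),
    Bidirectional(LSTM(150, activation='relu')),
    Flatten(),   # the LSTM output here is already 2-D (None, 300), hence the min_ndim=3 error
    Dense(150, activation='relu'),
    Dense(max_len, activation='sigmoid')
])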
I am not sure how to fix the dimensions. Has anyone run into this kind of error before?