How do we handle different input lengths, i.e. varied sentence/document lengths, while doing this? (Using TensorFlow.)
Since the graph is built based on the shape of the placeholder we use to initialize it, and since we are calculating lcontext and rcontext here, whose dimensions depend on the input length, I get a shape error each time I concatenate lcontext, the sequence representation, and rcontext, because the placeholder dimensions differ from what we feed in via feed_dict to run the training process.
So yes, I've used tf.placeholder(tf.float16, shape=[100, None]) (100 is the word embedding dimension), so basically the document with embeddings is fed into the placeholder as 100 x N, where 100 is the word embedding size and N is the number of words.
The problem is the next part; the placeholder itself doesn't give the error.
If you read the paper, we need to calculate the left context of a given word, which is lcontext(word_i) = W(1) * lcontext(word_{i-1}) + W(2) * embedding(word_{i-1}). This means we need to loop over the given sequence, which is the 2nd dimension of the placeholder, so when you give it None to initialize the graph it throws an error.
If you add a condition that falls back to some arbitrary number for the loop length when the 2nd dimension is None at graph-construction time, the dimension of lcontext gets fixed. Then the next time you feed in a sequence/document, it can't concatenate, because the document representation is 100 x N while lcontext stays at 100 x (whatever arbitrary value was chosen when the 2nd dim was None).
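One way to express the recurrence without fixing the loop length at graph-construction time is tf.scan, which iterates over whatever length actually arrives through feed_dict. A minimal sketch, assuming TF 1.x graph mode, a single time-major document of shape [None, 100] (i.e. the transpose of the 100 x N layout above), illustrative weight names W1/W2, and a tanh nonlinearity chosen just for the example:

```python
import numpy as np
import tensorflow as tf  # TF 1.x style graph code, as in the question

embed_dim = 100

# One document of word embeddings, time-major: [seq_len, embed_dim]; seq_len stays None
inputs = tf.placeholder(tf.float32, shape=[None, embed_dim])

W1 = tf.get_variable("W1", shape=[embed_dim, embed_dim])  # mixes the previous left context
W2 = tf.get_variable("W2", shape=[embed_dim, embed_dim])  # mixes the previous word embedding

# lcontext(word_i) depends on the *previous* word, so shift the sequence right by one step
prev_embeddings = tf.concat([tf.zeros([1, embed_dim]), inputs[:-1]], axis=0)

def step(prev_lcontext, prev_embedding):
    # lcontext_i = tanh(W1 . lcontext_{i-1} + W2 . embedding_{i-1}); both args are [embed_dim]
    new = tf.matmul(W1, tf.expand_dims(prev_lcontext, 1)) \
        + tf.matmul(W2, tf.expand_dims(prev_embedding, 1))
    return tf.tanh(tf.squeeze(new, axis=1))

# tf.scan runs the recurrence over the dynamic first dimension -> [seq_len, embed_dim]
lcontext = tf.scan(step, prev_embeddings, initializer=tf.zeros([embed_dim]))

# Concatenating along the feature axis now works for any sequence length
representation = tf.concat([lcontext, inputs], axis=1)  # [seq_len, 2 * embed_dim]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    doc = np.random.rand(37, embed_dim).astype(np.float32)  # any length works
    print(sess.run(representation, feed_dict={inputs: doc}).shape)  # (37, 200)
```

Because the loop is built from the runtime shape rather than a hard-coded length, the concatenation no longer breaks when the document length changes between runs.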
I did not have time to read the paper. Here's another thought: maybe you could use fixed dimensions everywhere and just add padding to make sure everything fits. I'll try to read the paper later today.
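If you go the padding route, a rough NumPy helper along these lines would do it; the function name, the max_len cutoff, and the zero-padding scheme are just illustrative, not anything from the paper:

```python
import numpy as np

def pad_documents(docs, max_len, embed_dim=100):
    """Zero-pad or truncate a list of (length_i, embed_dim) embedding matrices
    into one (len(docs), max_len, embed_dim) batch for a fixed-size placeholder."""
    batch = np.zeros((len(docs), max_len, embed_dim), dtype=np.float32)
    for i, doc in enumerate(docs):
        n = min(len(doc), max_len)
        batch[i, :n] = doc[:n]  # copy up to max_len timesteps, leave the rest as zeros
    return batch
```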
Just use None for the relevant dimension and add a GlobalAveragePooling layer before the end to create a fixed-length output (which can then be fed into additional layers).
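Roughly what I mean, as a tf.keras sketch; the Conv1D layer, the layer sizes, and the assumption of pre-computed 100-d embeddings as input are all just placeholders for your actual model:

```python
from tensorflow import keras

# Variable-length timestep dimension (None) collapsed to a fixed-size vector
inputs = keras.layers.Input(shape=(None, 100))                       # (timesteps, embed_dim)
x = keras.layers.Conv1D(64, 3, padding="same", activation="relu")(inputs)
x = keras.layers.GlobalAveragePooling1D()(x)                         # -> (64,) for any length
outputs = keras.layers.Dense(2, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```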
You can also try something along the lines of Spatial Pyramid Pooling to convert an arbitrary-length input into a fixed-size output without losing all positional information.
Haven’t tried that with text data but it could be interesting.
EDIT: OK, not exactly easy, since a lot of Keras assumes fixed-size inputs, particularly the batch generators. But with a bit of troubleshooting and custom code it works.
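For reference, the custom code boiled down to something like this 1D pyramid-pooling layer; the bin counts and the choice of max-pooling are mine, and it assumes each sequence is at least as long as the largest bin count:

```python
import tensorflow as tf
from tensorflow import keras

class SpatialPyramidPooling1D(keras.layers.Layer):
    """Pool the time axis into a fixed number of bins per pyramid level and
    concatenate, so any input length gives an output of size sum(bins) * channels."""

    def __init__(self, bins=(1, 2, 4), **kwargs):
        super().__init__(**kwargs)
        self.bins = bins

    def call(self, x):
        # x: [batch, timesteps, channels]; timesteps may be unknown at graph time
        length = tf.shape(x)[1]
        pooled = []
        for n_bins in self.bins:
            for b in range(n_bins):
                start = (b * length) // n_bins
                end = ((b + 1) * length) // n_bins
                pooled.append(tf.reduce_max(x[:, start:end, :], axis=1))
        return tf.concat(pooled, axis=-1)

# Usage: drop it in where the pooling would otherwise go
inputs = keras.layers.Input(shape=(None, 100))
x = keras.layers.Conv1D(64, 3, padding="same", activation="relu")(inputs)
x = SpatialPyramidPooling1D(bins=(1, 2, 4))(x)   # -> (7 * 64,) for any input length
outputs = keras.layers.Dense(2, activation="softmax")(x)
model = keras.Model(inputs, outputs)
```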