Variable Length Sequence to Variable Length Output Sequence Learning Advice

I am trying to build a model in Keras that has a variable length number of timesteps, and has a variable length number of outputs. Specifically, the data I am working with is a database of customers, where I have historical information about their activity over the last 3 years. Some customers are new and some are old, so I have varying-length histories for them. I want to be able to generate a prediction of how much each customer will spend over any arbitrary time horizon of my choosing.

I have done some research, and am wondering if anyone has any advice on the following:

Model Architecture:

  1. Should I simply use zero padding and then use masking to handle the variable length inputs? Does this mean I also need to zero pad the outputs?
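For option 1, here is a minimal sketch of padding plus masking, assuming tf.keras (the layer sizes and spend numbers are toy placeholders):

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Dense, Input, LSTM, Masking
from tensorflow.keras.models import Sequential

# Toy spend histories of different lengths (one feature per timestep).
histories = [
    [120.0],
    [80.0, 95.0],
    [60.0, 70.0, 85.0],
]

# Left-pad with zeros to the longest length, then add a feature axis.
X = pad_sequences(histories, padding='pre', dtype='float32')[..., None]  # (3, 3, 1)

model = Sequential([
    Input(shape=(None, 1)),
    # Masking makes downstream layers skip timesteps equal to mask_value,
    # so the zero padding does not pollute the hidden state. Note this
    # collides with genuine zero-spend timesteps; pad with a sentinel
    # value instead if real zeros occur in your data.
    Masking(mask_value=0.0),
    LSTM(16),
    Dense(1),  # many-to-one: one predicted spend per customer
])
model.compile(optimizer='adam', loss='mse')

preds = model.predict(X, verbose=0)  # one value per customer
```

On output padding: if you predict a single vector per customer (many-to-one, as above), the targets need no padding. Only a many-to-many setup with `return_sequences=True` needs padded targets, typically combined with `sample_weight` to zero out the loss on padded steps.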

  2. Alternatively, I suppose I could set the RNN to stateful and have a batch size = 1, and reset the state after the end of each unique customer’s history. This seems slow as I would be feeding in only 1 row at a time.
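Option 2 can be sketched like this (again assuming tf.keras; the model is untrained and the numbers are toy):

```python
import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Sequential

# batch_shape pins the batch size to 1; stateful=True carries the hidden
# state across successive predict calls instead of resetting each time.
model = Sequential([
    Input(batch_shape=(1, 1, 1)),
    LSTM(16, stateful=True),
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')

customer_history = np.array([60.0, 70.0, 85.0], dtype='float32')
out = None
for step in customer_history:
    # One timestep per call; the state accumulates across calls.
    out = model.predict(step.reshape(1, 1, 1), verbose=0)
# `out` is now conditioned on the whole history.

model.layers[0].reset_states()  # clear the state before the next customer
```

As you suspect, this is slow: every timestep is a separate forward pass, so it mostly makes sense as a fallback when padding or bucketing is impractical.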

  3. A hybrid approach - bucket my data into similar lengths and for each bucket set batch_size = bucket length, then apply zero padding and masking. However, I suppose this means I need to train a separate model on each bucket.
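On option 3: because the timestep dimension can be left as `None`, one set of weights can serve every bucket; you bucket the batches, not the model. A sketch (tf.keras assumed, toy data):

```python
from collections import defaultdict

import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Sequential

# One model handles all buckets: the timestep axis is None.
model = Sequential([
    Input(shape=(None, 1)),
    LSTM(16),
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# Toy (history, target) pairs of varying length.
data = [
    ([120.0], 130.0),
    ([80.0, 95.0], 100.0),
    ([60.0, 70.0, 85.0], 90.0),
    ([55.0, 65.0, 75.0], 80.0),
]

# Group customers by history length so each batch is rectangular
# and needs no padding or masking at all.
buckets = defaultdict(list)
for history, target in data:
    buckets[len(history)].append((history, target))

for length, pairs in buckets.items():
    X = np.array([h for h, _ in pairs], dtype='float32')[..., None]
    y = np.array([t for _, t in pairs], dtype='float32')
    model.train_on_batch(X, y)  # batch size == bucket size
```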

Making Predictions:

What is the best way to make variable-length predictions? I see a couple of ways:

  1. feed the predictions back into the model and keep generating output

  2. I understand that statefulness can be kept at prediction time as well; is that true? If so, is this a good approach, and does anyone know more about it?
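For prediction option 1, feeding outputs back in looks roughly like this (`forecast` is a hypothetical helper; the model here is untrained, so the values are meaningless):

```python
import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Sequential

model = Sequential([
    Input(shape=(None, 1)),
    LSTM(16),
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')

def forecast(model, history, horizon):
    """Roll the model forward `horizon` steps, appending each
    prediction to the input sequence before the next step."""
    seq = [float(v) for v in history]
    preds = []
    for _ in range(horizon):
        x = np.array(seq, dtype='float32').reshape(1, -1, 1)
        next_val = float(model.predict(x, verbose=0)[0, 0])
        preds.append(next_val)
        seq.append(next_val)
    return preds

# e.g. a six-step horizon from a three-step history
future = forecast(model, [60.0, 70.0, 85.0], horizon=6)
```

On option 2: yes, state persists across `predict` calls on a stateful model, so you can feed the known history once and then step forward one prediction at a time without re-feeding the whole growing sequence each step.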

Thanks!


I also posted on the Keras GitHub repo. Some helpful answers here:

Is this going to be covered in part 2?