An update, as I expect this topic may interest many users.
Regarding item #2 above, I'm trying to do something as follows:
# First, build the model:
from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(64, return_sequences=True, stateful=False, batch_input_shape=(1, 5, 1)))
model.add(LSTM(32, return_sequences=True, stateful=False))
model.compile(loss='mse', optimizer='adam')  # fit() requires a compiled model
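Note that batch_input_shape=(1, 5, 1) means (batch_size, timesteps, features), so the 5-point history has to be reshaped into a 3-D array before it is passed to fit() or predict(). A minimal numpy sketch:

```python
import numpy as np

# 'batch_input_shape=(1, 5, 1)' means (batch_size, timesteps, features),
# so a plain Python list of 5 scalars must be reshaped first.
sequence = [0.1, 0.2, 0.3, 0.4, 0.5]      # the last 5 data points
x = np.array(sequence).reshape(1, 5, 1)   # shape (1, 5, 1), as the model expects
```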
Then, every time a new data point 'current' is sent in, I do:
# 'current' is the new data point
# 'sequence' holds the previous 5 data points; both must be numpy arrays
# (e.g. 'sequence' reshaped to (1, 5, 1)) rather than plain lists before use
model.fit(sequence, current, epochs=1, batch_size=1, shuffle=False)
# remove the oldest item from 'sequence' and append 'current' at the end of it
future = model.predict(sequence, batch_size=1)
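For completeness, the window update described in the comment above can be sketched in plain numpy (the names 'sequence' and 'current' follow the snippet; the fit/predict calls are elided):

```python
import numpy as np

def slide_window(sequence, current):
    """Drop the oldest point and append the newest, keeping the length fixed."""
    return np.concatenate([sequence[1:], [current]])

# Example: a 5-point window receiving a new data point.
window = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
window = slide_window(window, 6.0)
# window is now [2., 3., 4., 5., 6.]
# (reshape with window.reshape(1, 5, 1) before handing it to the model)
```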
However, that doesn't seem to converge at all. Leaving aside the structure of the model itself (depth, number of parameters, dropout, etc.), is this approach fundamentally wrong?
Regarding #3, I'm not sure how to save a model with Keras. Apparently this approach doesn't seem to save everything (e.g., the learning rate, etc.). Does anyone have experience to share, please?
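For the saving part, as far as I know Keras can serialize the whole model (architecture, weights, and optimizer state, which includes the current learning rate) with model.save(), and restore it with load_model(). A minimal sketch with a small stand-in Dense model (the call is identical for the LSTM model above; the filename is arbitrary):

```python
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# Stand-in model; save()/load_model() work the same way for any Keras model.
model = Sequential()
model.add(Dense(4))
model.build(input_shape=(None, 3))
model.compile(loss='mse', optimizer='adam')

model.save('my_model.keras')             # one file: architecture + weights + optimizer state
restored = load_model('my_model.keras')  # restores all of the above

x = np.zeros((1, 3))
# the restored model should produce identical predictions
same = np.allclose(model.predict(x, batch_size=1),
                   restored.predict(x, batch_size=1))
```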