I have been racking my brain for weeks on this question, so I was hoping someone here could help.
I am trying to predict sales across 1000 different stalls. They are all temporary, so no two sales histories look exactly alike, and once a stall closes, that's it for that stall: no more data.
So, my dilemma is how to split up the data.
If I were working with one stall, I could do something like an LSTM. But then I would have to train 1000 different models.
If instead I try to put all the stalls into one model, I can't use an LSTM and have to do something like classification or regression instead.
So in the first case, the data is split into sliding windows within each stall and a separate model is trained per stall (lots of work). In the second case, the data from all the stalls is pooled and one model is trained to generalize across every stall. I don't even know if deep learning can do that.
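To make the two data splits concrete, here's a rough numpy sketch of what I mean (toy data, and the window length of 7 is an arbitrary choice I made up for the example):

```python
import numpy as np

def make_windows(series, window=7):
    """Turn one stall's daily sales into (X, y) pairs via a sliding window.

    Each row of X holds `window` consecutive sales values and the matching
    y is the next day's sales.
    """
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

# Toy data standing in for two stalls' sales histories.
stall_sales = {
    "stall_a": np.arange(30, dtype=float),
    "stall_b": np.arange(20, dtype=float),
}

# First case: a separate windowed dataset (and a separate model) per stall.
per_stall = {name: make_windows(s) for name, s in stall_sales.items()}

# Second case: pool every stall's windows into one training set,
# so a single model sees examples from all stalls at once.
X_all = np.vstack([X for X, _ in per_stall.values()])
y_all = np.concatenate([y for _, y in per_stall.values()])
print(X_all.shape)  # (23 + 13, 7) = (36, 7)
```

So option one means 1000 copies of the per-stall step, while option two just trains once on `X_all` / `y_all` and loses the per-stall sequence modelling.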
I don’t know which one to pick. Maybe there’s a third way?