Disclaimer 1: This is for a uni-related project, so if this is against the forum policy I apologize — please ignore or delete this post.
Disclaimer 2: I don’t really need help with anything code-related; I’m interested in ideas about which architecture to use for a NN.
So we have a device with 15 adjustable parameters (e.g. temperature). The parameters are changed every 10 seconds, and after 1 hour (i.e. 360 time steps) we get a single output that depends on all of these settings. We want to build a model that can predict this output for a given sequence of parameters. The data looks something like this:
run 1:  t1_1:   p1 p2 ... p15
        t1_2:   p1 p2 ... p15
        ...
        t1_360: p1 p2 ... p15   -> output1
run 2:  t2_1:   p1 p2 ... p15
        ...
        t2_360: p1 p2 ... p15   -> output2
...
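To make the layout concrete, here is a rough sketch (with fabricated random data — the names and sizes other than 360 steps and 15 parameters are just placeholders) of how I imagine grouping the rows of each run into one sequence:

```python
import numpy as np

STEPS_PER_RUN = 360   # one hour of 10-second intervals
N_PARAMS = 15

# Hypothetical flat data: one row of 15 parameters per time step,
# plus one output value per run. Random values just to show shapes.
n_runs = 4
flat = np.random.rand(n_runs * STEPS_PER_RUN, N_PARAMS)
outputs = np.random.rand(n_runs)

# Group the 360 consecutive rows of each run into one sequence:
# shape (n_runs, 360, 15) -- the natural input for a sequence model.
sequences = flat.reshape(n_runs, STEPS_PER_RUN, N_PARAMS)

print(sequences.shape)   # (4, 360, 15)
print(outputs.shape)     # (4,)
```

So each training example is one (360, 15) sequence paired with a single scalar target.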
Initially I thought I could do something similar to the Rossmann dataset, but there we had an output after each row, while here the output only arrives every 360 rows (note: the parameters are a mix of categorical and continuous variables), so I don’t think that approach would work.

Another idea was an RNN, but if I use a bptt of, say, 70, I have no target to compare against, because there is no output after the 70th step — and bptt=360 seems like too much.

I could also flatten all the data from one run into a single vector of 15 × 360 = 5400 elements and hope the network learns the output from that, but I doubt I can do much trying to predict one number from 5400 inputs (even more if I use embeddings for the categorical parameters).

Can someone suggest any other ideas that could work better for this kind of problem? Thank you!
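For reference, one direction I’ve been considering is a many-to-one recurrent model: read all 360 steps and only produce (and compare) a prediction at the end, so there is no need for an intermediate target at step 70. A minimal PyTorch sketch, assuming the categorical parameters have already been embedded or encoded into the 15-dimensional feature vector (hidden size and batch size are arbitrary placeholders):

```python
import torch
import torch.nn as nn

class RunRegressor(nn.Module):
    """Many-to-one model: consume the whole 360-step sequence,
    predict a single scalar from the final hidden state."""
    def __init__(self, n_features=15, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, 360, n_features)
        _, (h_n, _) = self.lstm(x)   # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)  # -> (batch,)

model = RunRegressor()
batch = torch.randn(8, 360, 15)      # 8 fake runs
pred = model(batch)
print(pred.shape)                    # torch.Size([8])
```

The loss (e.g. MSE against the per-run output) would then only be computed once per sequence, which sidesteps the "no output at step 70" problem — though I don’t know whether a 360-step sequence is too long for an LSTM to learn well here.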