Time series / sequential data study group

So if I’m understanding right, the big difference here is that the model isn’t learning the convolutions themselves: it generates a bunch of them randomly as a one-off, then just learns how much weight to place on each one. So in effect it only has to learn one weight for each convolution, rather than many.

Yes, the convolutions are only used once, to generate the features. Then you train your favorite classifier/regressor on those features. Good regularization is needed because you have so many features.
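If it helps to see the idea in code, here is a minimal sketch. It deliberately skips ROCKET’s random kernel lengths, dilations and paddings (among other details), and the dummy data and classifier settings are only illustrative, but it shows that the only trained parameters are the linear model’s weights on the random-kernel features:

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(0)

def random_kernel_features(X, n_kernels=100, ks=9):
    """X: (n_samples, seq_len). Two features per random kernel:
    the max and the proportion of positive values (ppv)."""
    n_samples, _ = X.shape
    feats = np.empty((n_samples, 2 * n_kernels))
    for k in range(n_kernels):
        w = rng.standard_normal(ks)   # random weights, drawn once, never trained
        b = rng.uniform(-1, 1)        # random bias
        for i in range(n_samples):
            conv = np.convolve(X[i], w, mode='valid') + b
            feats[i, 2 * k] = conv.max()             # max feature
            feats[i, 2 * k + 1] = (conv > 0).mean()  # ppv feature
    return feats

# dummy data; the only learning happens in the regularized linear classifier
X_train = rng.standard_normal((20, 150))
y_train = np.repeat([0, 1], 10)
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(random_kernel_features(X_train), y_train)
```

The cross-validated ridge regularization is what keeps the very large number of features under control.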

Indeed, it is a great idea, very fast and very effective!
Angus (the first author) is a colleague and PhD student at Monash University (Melbourne, Australia).

I will be meeting him next week during my visit to Monash and I will definitely talk to him about this forum.

And great work by @tcapelle adapting it to multivariate!

Reading through the history of this thread, I noticed there have been several competitions on specific TS problems. To my knowledge we don’t currently have a competition going. Would anyone be interested in starting up a new one? Personally I find the competition format really helpful for learning.

There is one related to time series regression organized by the European Space Agency, about collision avoidance. Here it is:
https://kelvins.esa.int/collision-avoidance-challenge/

I am not sure to what extent it requires more domain knowledge than machine learning to get good scores, though. But anyway, it is really interesting.

Multivariate ROCKET on GPU (PyTorch)

I’ve spent a couple of days using ROCKET and have created a PyTorch version to overcome the lack of multivariate and GPU support.

I’ve run some tests and can confirm that GPU support really helps speed up the feature creation for medium/large datasets.
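To give a rough idea of the kind of computation involved, here is a simplified sketch of multivariate random-kernel features in PyTorch (this is not the actual notebook code: it loops over kernels instead of batching them, and omits the random lengths, dilations and paddings):

```python
import torch
import torch.nn.functional as F

device = 'cuda' if torch.cuda.is_available() else 'cpu'

def rocket_like_features(X, n_kernels=100, ks=9):
    """X: (n_samples, n_channels, seq_len) tensor on `device`.
    Two features per kernel: max and proportion of positive values (ppv)."""
    n_samples, c_in, _ = X.shape
    feats = []
    for _ in range(n_kernels):
        weight = torch.randn(1, c_in, ks, device=device)      # fixed random kernel
        bias = torch.empty(1, device=device).uniform_(-1, 1)  # random bias
        out = F.conv1d(X, weight, bias)                       # (n_samples, 1, L)
        feats.append(out.max(dim=-1).values)                  # max feature
        feats.append((out > 0).float().mean(dim=-1))          # ppv feature
    return torch.cat(feats, dim=1)                            # (n_samples, 2 * n_kernels)

X = torch.randn(8, 3, 140, device=device)  # dummy multivariate batch
features = rocket_like_features(X)
```

The conv1d calls are the expensive part, which is why moving them to the GPU pays off on larger datasets.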


I’ve also tested my new ROCKET implementation on all 84 UCR univariate datasets and got the same results as the ones published in the paper.

And I’ve run the same code on the UCR multivariate datasets, and the results also beat the best published results by a large margin, and much faster. Here, for example, is the comparison to InceptionTime (one of the best DL models for TSC):

I think you may be interested in this @hfawaz.

I’ve shared a notebook where you can learn how to use the original version of ROCKET, as well as the new PyTorch version.

Once the ROCKET features are created, you can then use any classifier you want. In the notebook I show how you can use RidgeClassifierCV (as in the paper), or integrate the features with fastai to use, for example, a logistic regression.
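As a rough illustration of that classifier step (with random arrays standing in for the real ROCKET features, and purely illustrative names and settings):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV, LogisticRegression

# random arrays standing in for the ROCKET features created above
rng = np.random.default_rng(0)
X_train_feats = rng.standard_normal((100, 2000))
X_valid_feats = rng.standard_normal((50, 2000))
y_train, y_valid = np.tile([0, 1], 50), np.tile([0, 1], 25)

# RidgeClassifierCV, as in the paper: the cross-validated regularization
# copes well with the very large number of features
ridge = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
ridge.fit(X_train_feats, y_train)
print('ridge valid accuracy:', ridge.score(X_valid_feats, y_valid))

# but any classifier accepts the same feature matrix, e.g. logistic regression
logreg = LogisticRegression(max_iter=1000)
logreg.fit(X_train_feats, y_train)
print('logreg valid accuracy:', logreg.score(X_valid_feats, y_valid))
```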

Again, I cannot thank you enough for your work, @oguiza!!! It is amazing how you guys keep providing straightforward ways of testing SOTA algorithms like this new ROCKET. It looks really promising!

ROCKET on GPU! Great work @oguiza, many thanks :sunny:

Wow, fantastic work! This is a really exciting development!

The PyTorch implementation in the notebook was throwing some errors for me, but I was able to get it running with the following changes.

weight = torch.normal(0, 1, (1, c_in, ks)) -> weight = torch.randn(1, c_in, ks)

and

_max = out.max(dim=-1).values -> _max, _ = out.max(dim=-1)
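
For reference, here are the two changes together in a tiny self-contained snippet (c_in and ks are just placeholder values):

```python
import torch

c_in, ks = 3, 9  # placeholder channel count and kernel size

# torch.randn draws from the same standard normal as torch.normal(0, 1, size),
# so the random kernels themselves are unchanged
weight = torch.randn(1, c_in, ks)

out = torch.randn(2, 1, 50)  # stand-in for a conv1d output
# max(dim=...) returns a (values, indices) pair, so tuple unpacking works
# even where the .values attribute isn't available
_max, _ = out.max(dim=-1)
```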

Thanks @GiantSquid!

That’s really strange. I’ve tested both the original version and your version, and both deliver exactly the same result. And I’ve tested them on PyTorch 1.2 and 1.3. What version of PyTorch are you using?

I’ve changed my original implementation from weight = torch.normal(0, 1, (1, c_in, ks)) to weight = torch.randn(1, c_in, ks), as it’s less code and marginally faster :wink:

Aha, I was running it on Paperspace and had an old PyTorch (1.0).

OK, I think that’s probably the reason for it.
Thanks for raising this anyway. At least we now know that the ROCKET GPU version works on PyTorch 1.2 and 1.3.

Just a short message this time to say thank you to everyone for taking the time to try ROCKET and, in particular, to @tcapelle for adapting ROCKET for multivariate time series, and to @oguiza for developing a (multivariate) PyTorch GPU implementation (and also to both for inviting me here).

I’m interested to know from anyone / everyone whether ROCKET has been useful for you, any issues you might be having, etc., and I’ll do my best to help where I can.

There are a few things on my to-do list, including providing some better documentation to make ROCKET easier to use (esp. for trickier cases such as very large datasets, variable-length time series, etc.).

Please don’t hesitate to get in touch if you have any questions or comments. I’ll try to respond quickly (but I might be a bit slow in responding for the next couple of weeks).

Best,

Angus.

Fab, thanks Angus, all the great work is much appreciated

Thanks a lot @angusde for joining the fastai community, and welcome to this thread in particular.

(For those of you who are not familiar, Angus is the lead author of the recent ROCKET paper. ROCKET has beaten the state of the art on one of the main univariate time series benchmarks.)

It’s an honor to have world-class researchers like you and @hfawaz participating in this thread!

Thank you for your wonderful research, @angusde!! Just one question: you mention irregular time series. As of now, does ROCKET support them?

Thanks @oguiza, we are honored to be here, what you have assembled is beyond great.
Let’s hope we can advance the field by learning from each other!
Cheers,
Hassan

@oguiza did it again, the rest of my day will be wasted :rofl: reading the GPU implementation of the kernels and features.

This is impressive. Would it be possible to adapt it for regression (forecasting)? If not, do you know what the SOTA deep learning approaches are for multivariate time series forecasting with a univariate response? Thanks

I agree with @hfawaz. Thank you for inviting us here.

I would never describe myself as a world-class anything! I am just another PhD student. I hope ROCKET is useful in the real world, and I hope I can make myself useful here. I think I can probably learn more from you than you can from me…