Time series/ sequential data study group

Development of methods to isolate the relevant signals in these convolutions will be highly productive.

… Eagerly looking forward to it.

Authors, thanks so much for your work.

PredictionDynamics for time series

Hi,
I just wanted to share a new, brief tutorial notebook I’ve uploaded to the tsai repo to show how you can use the new PredictionDynamics callback to visualize the model’s predictions during training.
You can use it with any DL model (not just time series models), in classification or regression tasks.
I think it’s useful for getting a better understanding of how training is progressing.

This is the type of output you’ll get during training (it’s updated at the end of every epoch):
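The mechanics behind this kind of callback can be sketched roughly as follows. This is a toy stand-in, not tsai’s actual implementation: `PredictionSnapshotter` and `toy_model_factory` are hypothetical names, and the “training loop” is faked for illustration.

```python
class PredictionSnapshotter:
    """Toy stand-in for a prediction-dynamics callback: it records the
    model's predictions on a fixed validation batch at the end of every
    epoch, so their evolution can be inspected or plotted."""

    def __init__(self, val_inputs):
        self.val_inputs = val_inputs
        self.history = []  # one list of predictions per epoch

    def after_epoch(self, model):
        preds = [model(x) for x in self.val_inputs]
        self.history.append(preds)


def toy_model_factory(weight):
    # A "model" here is just y = w * x; each epoch nudges w upward.
    return lambda x: weight * x


val_inputs = [1.0, 2.0, 3.0]
cb = PredictionSnapshotter(val_inputs)

weight = 0.0
for epoch in range(3):
    weight += 1.0              # pretend this is one epoch of training
    cb.after_epoch(toy_model_factory(weight))

print(cb.history[-1])  # → [3.0, 6.0, 9.0]
```

The real callback redraws a chart from the latest snapshot at the end of each epoch instead of accumulating everything in memory.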


Wow, that’s really cool, thanks!!! I love these kinds of interpretability tools! By the way, can you share Mr. Karpathy’s blog post? The link in the notebook is broken :wink:

Would it be possible to see the evolution of that plot after the training has ended? Is it stored somewhere?


Thanks for your feedback @vrodriguezf!

Here’s the link (I’ll fix it on the notebook).

I don’t store the evolution anywhere, as it’d probably slow down the process considerably. It takes 250 ms to update the chart, but saving it would probably be much slower. I’ll check it anyway. Something we could add, though, is an option to save it, knowing it’ll make training slower. What’s your use case for storing the evolution? How would you use it?


It probably only makes sense if you are using an experiment tracking tool like Weights & Biases behind your training. That tool does exactly that: it logs things (e.g., the activations of your layers) at each training step, to allow easy navigation through them afterwards.
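The “log one artifact per step and browse it later” idea that trackers like Weights & Biases implement can be sketched with nothing but the standard library. The file-naming scheme and the `log_epoch_predictions` helper below are hypothetical, just to show the pattern:

```python
import json
import tempfile
from pathlib import Path

def log_epoch_predictions(log_dir, epoch, preds):
    """Write one epoch's predictions to its own file on disk, so the
    whole evolution can be navigated after training ends."""
    path = Path(log_dir) / f"preds_epoch_{epoch:03d}.json"
    path.write_text(json.dumps(preds))
    return path

log_dir = tempfile.mkdtemp()
for epoch, preds in enumerate([[0.2, 0.8], [0.1, 0.9], [0.05, 0.95]]):
    log_epoch_predictions(log_dir, epoch, preds)

# Reload the full evolution afterwards, one file per epoch.
files = sorted(Path(log_dir).glob("preds_epoch_*.json"))
evolution = [json.loads(f.read_text()) for f in files]
print(len(evolution))  # → 3
```

A real tracker adds metadata, deduplication, and a UI on top, but the core write-per-step / read-back-later loop is this simple.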

I can think about it and make a PR if I manage to spare some coding time :slight_smile:


Meeting with Angus Dempster - Rocket, MiniRocket, and MultiRocket

Hi all,

I’d like to invite you to participate in a web meeting we’ll have with Angus Dempster next week (@angusde ).

For those of you who don’t know him, Angus is a Ph.D. student at Monash University in Australia (a world-class group in time series research) and is one of the authors of several outstanding papers:

As you probably know, the ROCKETs have made a very significant impact on Time Series Classification and Regression. They are not only incredibly fast but have also established a new SOTA!

Interestingly, the ROCKETs use a very different approach compared to all other algorithms.
If you want to learn more about how they work, and want to take the opportunity to ask questions of one of the top researchers in the time series area, come and join us!
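To give a rough feel for what makes ROCKET’s approach different: instead of learning convolutional filters, it applies many random convolutional kernels and summarizes each one’s activations (notably with PPV, the proportion of positive values), then trains a simple linear classifier on those features. Below is a greatly simplified sketch of that idea; the real ROCKET and MiniRocket differ in many details (kernel lengths, padding, weight sampling, and more), so treat this as illustration only:

```python
import random

def convolve1d(x, kernel, bias, dilation):
    """Valid 1-D convolution with dilation (no padding, stride 1)."""
    k, span = len(kernel), (len(kernel) - 1) * dilation
    out = []
    for i in range(len(x) - span):
        s = bias + sum(kernel[j] * x[i + j * dilation] for j in range(k))
        out.append(s)
    return out

def rocket_features(x, n_kernels=100, seed=0):
    """Simplified ROCKET-style features: for each random kernel, keep the
    max activation and the proportion of positive values (PPV)."""
    rng = random.Random(seed)
    feats = []
    for _ in range(n_kernels):
        kernel = [rng.gauss(0, 1) for _ in range(9)]
        bias = rng.uniform(-1, 1)
        dilation = rng.choice([1, 2, 4])
        act = convolve1d(x, kernel, bias, dilation)
        feats.append(max(act))                            # max pooling
        feats.append(sum(a > 0 for a in act) / len(act))  # PPV
    return feats

series = [0.1 * t for t in range(50)]
feats = rocket_features(series)
print(len(feats))  # → 200 (2 features per kernel)
```

A ridge or logistic-regression classifier would then be fit on these features; since no kernel weights are learned, the whole pipeline is extremely fast.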

Date/Time:

  • February 10th (Wednesday) from 5:00 to 6:00 am Australian Eastern Daylight Time
  • February 9th (Tuesday) from 7:00 to 8:00 pm Central European Time
  • February 9th (Tuesday) from 1:00 to 2:00 pm Eastern Standard Time

If you are willing to participate, please reply to this post indicating so, and I will forward you the link to the meeting through the forum’s email. We’ll use Google Meet.


I’m interested, please send the invite.

I am also interested - please send the invite.

I am interested. Kindly send the invite.

Hi oguiza, hope all is well!

Could you send me an invite also?

:smiley: :smiley:

I’d love to join this!

Hi @oguiza
I am also very interested.
Can I get an invite? :slight_smile:

would love to join :slight_smile:

Hello @oguiza.

I am excited to join as well!
Would you please kindly send me a link?

Thanks for organising it (and this amazing forum and tsai)

I am interested in participating. Please send the invite!

Thanks for arranging this meeting. I’d like to participate :).

I’d like to participate. Thank you.

I would like to participate :slight_smile:

Hi, I am new here, and I really like the content; it is very informative. However, I have a doubt. I was going through the InceptionTime paper (and other papers as well), and I wanted to ask: if I am applying convolutional neural networks to time series, won’t the convnet treat the time series as a bag of subsequences? Since convolutional neural networks are translation-invariant (in the case of computer vision), in the case of time series data the convnet will become time-invariant. I also remember someone mentioning Uber’s CoordConv paper, where they encode the position coordinates to make the convnet position-aware (correct me if I am wrong); similarly, Transformers use positional encoding to avoid this issue (in the case of NLP it would be a bag of words). So would using positional encoding help the model? @hfawaz
Sorry if I have mistaken something :slight_smile:
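The CoordConv idea the question refers to can be sketched for a univariate time series by stacking a normalized position channel alongside the values, so the convolution can tell *where* in the sequence a pattern occurs. The helper name below is hypothetical and this is just a sketch of the trick, not code from any of the papers mentioned:

```python
def add_time_coord_channel(series):
    """CoordConv-style trick for a univariate time series: stack a
    normalized time-index channel in [0, 1] alongside the values, so a
    convolution sliding over the input also sees absolute position."""
    n = len(series)
    coords = [t / (n - 1) for t in range(n)]
    return [series, coords]  # shape: 2 channels x n timesteps

x = [5.0, 3.0, 8.0, 1.0, 4.0]
x2 = add_time_coord_channel(x)
print(x2[1])  # → [0.0, 0.25, 0.5, 0.75, 1.0]
```

A 1-D convnet fed this 2-channel input is no longer purely time-invariant: identical patterns at different positions produce different activations through the coordinate channel, which is the same role positional encoding plays in Transformers.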

Interested :slight_smile: