Time series/ sequential data study group

Hi @hfawaz,

Welcome to our study group! It’s a privilege to have a world-class Time Series researcher joining us!

I hope you’ll find the experience as useful and rewarding as I have. I can say that for me the fastai community’s been the best learning and collaborative environment I’ve found in the area of ML.

I’d really like to thank you and the rest of the team for the quality of the work you are producing and for openly sharing your code. I think you’re raising the standard of research in TS.

I also work in the area of Time Series Classification and Regression (not Forecasting), mainly with multivariate datasets.

I have a few comments on your previous post:

  • InceptionTime: I read your paper when it was published, found it super interesting, so I created a PyTorch version. I’ve been using it for a couple of weeks, and results on my own datasets are better than with ResNet. So thanks a lot for developing it! Personally, I think the idea of using larger receptive fields goes in the right direction. I’m building a Practical Time Series repo, which I’ll be able to share either today or tomorrow; it contains everything required to train TS models with fastai, as well as a collection of some of the state-of-the-art TS architectures (FCN, ResNet, ResCNN, InceptionTime, etc.). I’m currently investigating ways to improve the performance of the InceptionTime network by applying the fastai framework.
  • Imaging Time Series: I’m with you and Jeremy that encoding TS as images seems like a waste of time, since all the information is already contained in the raw data. However, I’ve seen that imaging works really well on some datasets, even tiny ones, because you can benefit from computer vision transfer learning. I have tried multiple encodings (Gramian, MTF, Recurrence Plots, Wavelets, etc.) with mixed results. I believe that in the end raw-input models should prevail, but it’s also true that our brains are far better at identifying patterns in charts than in numerical data.
  • Recurrent models: In every comparison I’ve made, CNN models have been far superior to RNNs, and they are much faster to train. I gave up on RNNs some time ago.
  • Regression: I’m also working in this area, but my datasets are proprietary, so I cannot share them. Sorry about that!
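In case it’s useful to anyone following along, here is a minimal PyTorch sketch of the inception module at the heart of InceptionTime: a bottleneck 1x1 convolution, a few parallel convolutions with large receptive fields, and a maxpool + 1x1 branch, all concatenated. The class name, parameter names, and defaults here are mine, not the official implementation, so treat it as an illustration rather than a reference.

```python
import torch
import torch.nn as nn


class InceptionModule(nn.Module):
    """One inception module for time series, along the lines of the
    InceptionTime paper. Names and defaults are my own choices."""

    def __init__(self, in_channels, n_filters=32, kernel_sizes=(39, 19, 9),
                 bottleneck=32):
        super().__init__()
        # 1x1 bottleneck reduces channel depth before the wide convolutions
        use_bottleneck = in_channels > 1
        self.bottleneck = (nn.Conv1d(in_channels, bottleneck, 1, bias=False)
                           if use_bottleneck else nn.Identity())
        c = bottleneck if use_bottleneck else in_channels
        # parallel convolutions with large receptive fields
        # (odd kernel sizes + padding = k // 2 preserve the sequence length)
        self.convs = nn.ModuleList(
            nn.Conv1d(c, n_filters, k, padding=k // 2, bias=False)
            for k in kernel_sizes)
        # maxpool branch followed by a 1x1 convolution
        self.pool_conv = nn.Sequential(
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Conv1d(in_channels, n_filters, 1, bias=False))
        self.bn = nn.BatchNorm1d(n_filters * (len(kernel_sizes) + 1))
        self.act = nn.ReLU()

    def forward(self, x):
        b = self.bottleneck(x)
        out = torch.cat([conv(b) for conv in self.convs] + [self.pool_conv(x)],
                        dim=1)
        return self.act(self.bn(out))
```

Stacking a few of these (with residual connections every third module, as in the paper) and finishing with global average pooling plus a linear head gives the full network.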

Just to give you an idea, here are a few areas I’m currently testing in multivariate TS (everything using fastai):

  • Impact of LSUV (and related) initialization
  • New optimizers (like Ranger, developed by some great fastai colleagues - thread)
  • New activation function (also developed by some great fastai colleagues - thread)
  • Data augmentation: cutout, mixup, cutmix,…
  • Semi-supervised learning: mixmatch, uda, s4l
  • Training: progressive resizing
  • Ensembles vs multi-branch models vs hybrids
  • New hybrid Time-Frequency models
  • Inception architecture tweaks: ‘bag of tricks’
  • Visualization of activations
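For the data augmentation point, one nice property of raw time series is that mixup needs no adaptation at all: you just take convex combinations of series and of their labels. A minimal sketch (assuming one-hot or soft label tensors; the function name and defaults are mine):

```python
import torch


def mixup_batch(x, y, alpha=0.4):
    """Mixup applied to a batch of raw time series.

    x: (batch, channels, length) float tensor of series
    y: (batch, n_classes) one-hot (or soft) label tensor
    Returns convex combinations of shuffled pairs; `alpha` controls how
    far the Beta-sampled mixing weight tends to stray from 0 or 1.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix
```

Cutout and cutmix follow the same pattern, except that instead of blending whole series you zero out or swap a random time window.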

I’ll post any significant insights I get during my experiments.

I’m more than happy to discuss any of this with anybody who’s interested. I’ll also create notebooks to demonstrate this functionality.
