Welcome to our first Time Series Learning Competition: Earthquakes!

Ok, I’ve updated it. Thx!

I've run the exact same notebook as-is, swapping only the task to Adiac (even the learning rate is the same), and got 87.5% (SOTA on the website says ~80%). I will try to run it out-of-the-box on some other ones and keep you guys posted.

edit1: ChlorineConcentration got 91.3% out-of-the-box; the previous best on the site says 84.57%. In all of these, the only difference from the notebook I posted was that fit_one_cycle was set to 10 epochs.

edit2: CricketX got 88% out-of-the-box; the best posted on the site was 81.4%. I will stop updating these for a while and go hang out with family; would love it if someone could try to reproduce some of these to verify. It seems @oguiza's idea for the convolution approach was very strong, and trying with several images at once gives very strong results! Cheers guys, 'till tomorrow!


Hi all, I should have checked the thread :slight_smile: 96% is very impressive!!

Anyways, I ran a bunch of non-DL, non-fastai baselines with varying levels of complexity (xgboost, linear models, spectral features, NLP-like features, …).

The best I could manage in a couple of hours is barely above 76%, only slightly better than the all-ones baseline.

That said I didn’t really do any hyperparameter tuning so…

Anyway, in case someone is interested, here is my baseline notebook.


Well done @mb4310! Your multi_image_classifier seems to work great with TS encoded as images, really promising!

In the first day of this learning competition we have beaten not just one, but several SOTAs in different TS datasets, and what I think is even more important, we are learning a lot about how to approach a time series problem! (At least I certainly am.)

I’ll try to find some time to review your notebook.
I also have another related idea I’d like to test.

I have a question on the multi_image_classifier: I guess in the same way it can take multiple images from a univariate time series, it could take images for multivariate time series in different channels, couldn't it? Is there a limit to the number of images the classifier could take at once? Or is the limit just available RAM?

Note: As to the state of the art, the UCR website is not up to date. There's a paper published a year ago that I think better reflects the state of the art for all 85 UCR datasets. I think we should use it as a benchmark. In any case, the results you have shown in several datasets are SOTA or very competitive.


It can take as many images as you want, but as it's structured now, each resnet core outputs 512 channels, and for each image we do a fastai ConcatPool which raises this to 1,024 channels; with 3 images we're up to 3,072. We have to be careful as we approach multivariate time series, since my RAM is already taxed at that point, and choose which images can provide the best lift for which channels. I would not be surprised if, when we get to multivariate time series, your idea of encoding an image per channel is the best way to go in many cases.
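The channel arithmetic above can be sketched in plain PyTorch. `ConcatPool2d` here is a hypothetical stand-in for fastai's `AdaptiveConcatPool2d`, which concatenates adaptive average- and max-pooling and so doubles the channel count:

```python
import torch
import torch.nn as nn

class ConcatPool2d(nn.Module):
    """Concatenate adaptive average- and max-pooling, doubling the channel
    count (plain-PyTorch stand-in for fastai's AdaptiveConcatPool2d)."""
    def __init__(self):
        super().__init__()
        self.avg = nn.AdaptiveAvgPool2d(1)
        self.max = nn.AdaptiveMaxPool2d(1)

    def forward(self, x):
        return torch.cat([self.avg(x), self.max(x)], dim=1)

# One resnet body ends with 512 channels; concat-pooling doubles that to 1024.
features = torch.randn(4, 512, 7, 7)   # (batch, channels, h, w)
pooled = ConcatPool2d()(features)
print(pooled.shape)                    # torch.Size([4, 1024, 1, 1])
# With 3 images through 3 parallel bodies: 3 * 1024 = 3072 channels total.
```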

I’ve briefly gone through it and found it super-interesting! I hadn’t seen some of those techniques implemented before. It will be good to benchmark DL against more established non-DL techniques, but it may be really difficult to beat @mb4310’s results. In any case, this is another great learning opportunity! Thanks for sharing your notebook! And please, let us know about your progress.

Great, thx! We’ll learn more in the next few days, but in any case it’s a great start! Keep up the great work!

Hey guys, in response to @henripal: I re-ran my notebook after stepping away for a bit and rebooting, and I’m getting nonsense results; basically baseline across the board. I have no idea what might have happened or what I might have done wrong; I didn’t change any of the code, and yet it’s throwing nonsense results. Apologies — I didn’t mean to post preemptively. I’ll make it a priority to clean it up and see what’s going wrong before the next one I post; sorry, that’s really embarrassing on my part. Will keep at it; all I can say is I didn’t mean to mislead anyone, honestly my mistake if it was one; I’ll catch it if I can.


@mb4310, no need to apologize. I think what you describe is something that has happened to all of us. We get some excellent results in a problem, get excited, and then for some reason can’t reproduce them again. That’s part of the life of a data scientist.
And there are learnings in this too. We are here to learn! Keep up the good work. But take your time to enjoy your holiday!


Having learned how important it is to watch out for class imbalances :wink: , I wanted to share my new default top-of-the-notebook one-liner for class balance. (I love one-liners…)

df[0].value_counts(1)

will give you

0.0    0.748201
1.0    0.251799
Name: 0, dtype: float64

as output (where df[0] is the label column of your dataframe).

What does this do?
The value_counts() method normally gives you the counts of the unique items:

0.0    104
1.0     35
Name: 0, dtype: int64

value_counts has a parameter called ‘normalize’, so value_counts(normalize=True) will give you percentages of the unique values. As normalize is the first positional parameter, the keyword can be omitted, and 1 is truthy in Python, so we can shorten it. (Not very pythonic, but less typing…)

df[0].value_counts(normalize=True) == df[0].value_counts(1)


:+1:

I would just add that in some problems it may be important to have that count split by train, val and whenever possible test (like in Kaggle with a 0s or 1s submissions), as the distributions might not be the same.


Of course, but you can simply call this on the two separate dataframes, or, even with randomly sampled validation sets drawn from the same df, you can call it on the subset by index. But yes, then you need at least 2 lines :wink:
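For the per-split version, something like the following works (the dataframe and index split here are toy examples; your actual indices come from however you built the validation set). The ordered split below is deliberately skewed to show why checking each split matters:

```python
import pandas as pd

# Toy dataframe in the same shape as the UCR CSVs: column 0 is the label.
df = pd.DataFrame({0: [0.0] * 104 + [1.0] * 35})

# Hypothetical index-based split; taking the first rows as train is a
# worst case, since the file is sorted by class.
train_idx, valid_idx = df.index[:100], df.index[100:]

print(df.loc[train_idx, 0].value_counts(normalize=True))  # train: 100% class 0.0
print(df.loc[valid_idx, 0].value_counts(normalize=True))  # valid: ~90% class 1.0
```

The two distributions are wildly different, which is exactly the situation the two-line check catches.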


Thanks for the link to the paper!
Going to try to reimplement their architecture; I’m super surprised by the “dimension shuffle” layer they use. They take a univariate time series with N steps and treat it as an N-variate time series with one time step, and then run that through an LSTM. Very strange. It works, I guess. Any thoughts?
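In plain PyTorch, the dimension shuffle amounts to a single permute before the LSTM, something like this (shapes are illustrative; the paper’s code is in Keras):

```python
import torch
import torch.nn as nn

batch, n_steps = 8, 140              # e.g. a univariate UCR series of length 140
x = torch.randn(batch, n_steps, 1)   # (batch, time, features) - the usual view

# "Dimension shuffle": treat the N-step univariate series as an
# N-variate series with a single time step.
x_shuffled = x.permute(0, 2, 1)      # -> (batch, 1, n_steps): 1 step, N features

lstm = nn.LSTM(input_size=n_steps, hidden_size=128, batch_first=True)
out, _ = lstm(x_shuffled)
print(out.shape)                     # torch.Size([8, 1, 128]): one step, 128 units
```

Since the LSTM only ever sees one time step, it behaves more like a gated fully connected layer over the whole series than a recurrence, which may be why it works at all.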

Also on the table are reimplementations of ES-RNN as well as the WaveNet-style temporal convolutions -> LSTM (I haven’t found results for these on the UCI datasets).


That’ll be great. It’s supposed to work very well, but I haven’t implemented it myself. It’s something I had in mind, but I’m very happy you’re taking the lead on this. I’m not sure if you’ve seen it, but the code they used is available on GitHub. It’s in Keras.
They also have a way to visualize class activation maps that I assume is similar to the work you did. Let us know what you think.

I have not analyzed it in detail yet, so I can’t explain it, but I’ll read the article carefully and share any insights I get.

I think in the repo they only build the FCN model and compare results to other published data.
The other paper you may be interested in is “Fast and Accurate Time Series Classification with WEASEL”.
Looking forward to reading your findings!!


I followed your steps, got this error, and don’t know how to solve it.

<ipython-input-99-88f4e35f7001> in __getitem__(self, idx)
     28 
     29     def __getitem__(self, idx):
---> 30         return self.x[:,:,:,idx], torch.Tensor([int(self.y[idx].cat)]).long().squeeze()

AttributeError: 'Category' object has no attribute 'cat'

There was a change in the fastai library, so you have to remove the .cat in the __getitem__ method of the class in the notebook, and that gets rid of the error. It should just say self.y[idx] there.
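For reference, the corrected method would look something like this. Only the `__getitem__` line comes from the traceback; the class name and the rest of the class are a hypothetical reconstruction of the notebook’s dataset:

```python
import torch
from torch.utils.data import Dataset

class TSImageDataset(Dataset):
    """Hypothetical sketch of the notebook's dataset class."""
    def __init__(self, x, y):
        self.x, self.y = x, y          # x: (channels, h, w, n_items)

    def __len__(self):
        return self.x.shape[-1]

    def __getitem__(self, idx):
        # `.cat` removed: after the fastai change, self.y[idx] converts
        # to int directly.
        return self.x[:, :, :, idx], torch.Tensor([int(self.y[idx])]).long().squeeze()
```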


From my own experience, you might have mistakenly used data from the training set in your validation set. It happened to me once when I deleted some images, which changed the (random) training and validation split, so previously trained images got into the new validation set, suddenly giving extremely high accuracy.

Potential approaches to Time Series problems

I think it would be good to have a discussion about possible approaches to tackle time series problems.

To start this discussion, I thought it’d be good to share with you a summary of insights, papers, etc I’ve been gathering over the last few months. All these options come with their corresponding code, so they shouldn’t be too difficult to implement and see the results.

In general, there are DL and non-DL approaches. There’s still a lot of debate about whether one approach works better than the other. I think it might depend on the problem. I’ll focus more on DL approaches, but would be happy to hear from others with more experience in non-DL approaches. (Note: there are some good examples of how to apply non-DL approaches in @henripal’s code.)

Within DL there are 3 main approaches:

  • RNNs (LSTM/ GRU)
  • CNNs
  • Hybrid models

RNNs have traditionally been used for TS problems, but CNNs and hybrid models have shown higher performance in many cases. Jeremy predicted in Mar 2017 that by the end of the year CNNs might take over from LSTM/GRU for time series, and I think that’s probably true.

Here are some of the approaches I consider more interesting:

RNNs:

  1. Vanilla/ stacked/ Bidirectional LSTM/ GRU
  2. Dilated RNNs : Dilated Recurrent Neural Networks (S. Chang, NIPs 2017) TF code

CNNs:

  1. Transfer learning applied to time series images (ts —> image —> resnet):
    1.1. Single image: 1-3 channel images (an encoder per channel) in a single resnet, @oguiza Pytorch code
    1.2. Multi-image: 1-3 channel images (an encoder per channel) in parallel resnets @mb4310 Pytorch code
  2. Training from scratch:
    2.1. Tiled Convolutional Neural Networks: Encoding Time Series as Images for Visual Inspection and Classification Using Tiled Convolutional Neural Networks (Z. Wang, 2015) code
    2.2. Temporal convolutional networks (TCNs): An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling, (S. Bai, 2018), Pytorch repo
    2.3. TrellisNet (modified TCN): Trellis Networks for Sequence Modeling (S. Bai, 2018), Pytorch repo

Hybrid models:

  1. DeepConvLSTM: Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition (Ordoñez, 2016) Keras code
  2. LSTM Fully Convolutional Network (Temporal convolutions + LSTM in parallel):
    2.1. LSTM Fully Convolutional Networks for Time Series Classification (F. Karim, 2017), current state of the art in many UCR univariate datasets, Keras code
    2.2. Multivariate LSTM-FCNs for Time Series Classification (F. Karim, 2018), current state of the art in many UCR multivariate datasets, Keras code

Please let me know if you have questions on any of these approaches, or if I’m missing anything you think is interesting. Looking forward to your comments.
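To make the hybrid family above concrete, here is a minimal sketch of the LSTM-FCN idea: a temporal-convolution branch and a dimension-shuffled LSTM branch run in parallel, then their features are concatenated for classification. The filter sizes follow the paper’s FCN block (128-256-128), but everything here is an illustrative PyTorch sketch, not the authors’ Keras code:

```python
import torch
import torch.nn as nn

class LSTMFCN(nn.Module):
    """Illustrative LSTM-FCN for univariate series of fixed length."""
    def __init__(self, n_steps, n_classes, hidden=128):
        super().__init__()
        self.fcn = nn.Sequential(
            nn.Conv1d(1, 128, 8, padding=4), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 5, padding=2), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, 3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Dimension shuffle: the whole series becomes one LSTM time step.
        self.lstm = nn.LSTM(input_size=n_steps, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(128 + hidden, n_classes)

    def forward(self, x):                  # x: (batch, 1, n_steps)
        conv = self.fcn(x).squeeze(-1)     # (batch, 128)
        # For univariate input, (batch, 1, n_steps) already is the
        # dimension-shuffled view: seq_len=1, n_steps features.
        _, (h, _) = self.lstm(x)
        return self.head(torch.cat([conv, h[-1]], dim=1))

model = LSTMFCN(n_steps=96, n_classes=2)
logits = model(torch.randn(4, 1, 96))
print(logits.shape)                        # torch.Size([4, 2])
```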


Thank you, this is very helpful :slight_smile:

Hi! Here is my post-Thanksgiving-dinner implementation of the FCN-LSTM. Not surprisingly, it doesn’t get to 83.5% accuracy yet (it barely gets to 76%).

FCN-LSTM notebook

Will reread when I get some time, but if anyone wants to proofread, I’d appreciate it :slight_smile:
