Hi guys, I’ve created two gists, but they are not rendering for me when I view the page; either way, you should be able to download the notebooks and view them locally.
Update: the gists were not working, so you can find both files here: https://github.com/mb4310/ts
The first demonstrates how a ULMFiT-type approach works in the context of time-series classification. A couple of comments: I have not experimented extensively, but I have found the results are pretty consistently worse across the board than the convolutional approach, perhaps because there is not enough data to train a strong forecaster on (in my experience this approach works well on domains with thousands of fairly long training time-series). Anyway, I hope you find it interesting!
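For anyone who wants the gist of the idea without opening the notebook, here is a minimal sketch in plain PyTorch (not the notebook's actual code, and the layer sizes are placeholders): pre-train an RNN encoder on next-step forecasting, then reuse the same encoder under a classification head.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared RNN body, used first for forecasting and then for classification."""
    def __init__(self, n_hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=n_hidden, num_layers=2, batch_first=True)

    def forward(self, x):                      # x: (batch, seq_len, 1)
        out, _ = self.rnn(x)
        return out                             # (batch, seq_len, n_hidden)

class Forecaster(nn.Module):
    """Stage 1: train the encoder on next-step prediction (the 'language model' analogue)."""
    def __init__(self, encoder, n_hidden=64):
        super().__init__()
        self.encoder, self.head = encoder, nn.Linear(n_hidden, 1)

    def forward(self, x):
        return self.head(self.encoder(x))      # per-step prediction of the next value

class Classifier(nn.Module):
    """Stage 2: keep the pre-trained encoder, swap in a classification head and fine-tune."""
    def __init__(self, encoder, n_classes, n_hidden=64):
        super().__init__()
        self.encoder, self.head = encoder, nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        return self.head(self.encoder(x)[:, -1])   # classify from the last hidden state
```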
The second repackages the work done by @oguiza using a CNN and an image transformation for classification. I have included it because it automates the transform process and works seamlessly on any of the UCR univariate time-series datasets ‘out of the box’ (you basically just give the name of the dataset and the transformation you want and run three cells), in case anyone would like to experiment further and compare results on different datasets.
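The notebook wraps the transform step for you, but if you want to see what happens under the hood, here is a minimal sketch assuming the pyts library and a Gramian Angular Field transform (the random array is just a stand-in for a UCR dataset):

```python
import numpy as np
from pyts.image import GramianAngularField

X = np.random.randn(16, 140)                  # 16 univariate series of length 140 (placeholder data)
gaf = GramianAngularField(image_size=64, method='summation')
images = gaf.fit_transform(X)                 # shape (16, 64, 64): one "image" per series
# each image can then be fed to a CNN exactly as in the lesson-1 workflow
```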
I am working on an approach similar to the one taken in the papers shared by @rpicatoste above: train a separate forecaster for each class; then, given a new time-series, have all the forecasters try to predict it, measure the errors, and either train a classifier on the resulting error time-series or simply pick the class with the lowest MSE. I should have that up later tonight or tomorrow.
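To make the "pick the class with the lowest MSE" variant concrete, here is a minimal sketch; it assumes one already-trained forecaster per class with a hypothetical `.predict(series)` method that returns one-step-ahead predictions:

```python
import numpy as np

def classify_by_forecast_error(series, forecasters):
    """series: 1-D numpy array; forecasters: dict mapping class label -> trained forecaster."""
    errors = {}
    for label, model in forecasters.items():
        preds = model.predict(series[:-1])                     # predict each next step (hypothetical API)
        errors[label] = np.mean((preds - series[1:]) ** 2)     # MSE of the one-step forecasts
    # either return the per-class error series/values for a downstream classifier,
    # or just take the class whose forecaster fits best:
    return min(errors, key=errors.get), errors
```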
Next I want to add the continuous wavelet transform to the list of transforms available in the CNN notebook, just to see how it compares. The problem is that there are some parameters you generally have to tune which change from problem to problem, and I’m trying to find a good “rule of thumb” so that the notebook stays easy to run without a background in signal processing.
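For reference, this is roughly what the transform would look like, assuming PyWavelets (pywt); the scale range and wavelet choice below are placeholders, and they are exactly the kind of parameters I still need a rule of thumb for:

```python
import numpy as np
import pywt

x = np.random.randn(256)                      # stand-in for one series
scales = np.arange(1, 65)                     # scales to evaluate (problem-dependent!)
coeffs, freqs = pywt.cwt(x, scales, 'morl')   # Morlet wavelet; coeffs has shape (64, 256)
scalogram = np.abs(coeffs)                    # 2-D array that could be fed to the CNN as an image
```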
Finally, I’d like to add the following functionality to the transform notebook, which I think will ultimately yield the best results: multi-image classification. Take exactly the lesson-1 approach, but instantiate several resnets (minus the last few layers), feed each one a separate image, concatenate the outputs of the cores, and train a classifier on top of the concatenation. This will require tinkering a bit with the fastai library, or else some ugly hacking, so it could take a couple of days.
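Here is a minimal sketch of what I have in mind, in plain torchvision rather than fastai (so not the final implementation): one resnet body per image, pooled features concatenated, and a single linear head on top.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiImageClassifier(nn.Module):
    def __init__(self, n_images=3, n_classes=10):
        super().__init__()
        # each "core" is a resnet18 with its final fc layer chopped off (keeps the avgpool)
        self.cores = nn.ModuleList([
            nn.Sequential(*list(models.resnet18(pretrained=True).children())[:-1])
            for _ in range(n_images)
        ])
        self.head = nn.Linear(512 * n_images, n_classes)

    def forward(self, xs):                     # xs: list of n_images tensors, each (batch, 3, H, W)
        feats = [core(x).flatten(1) for core, x in zip(self.cores, xs)]   # each (batch, 512)
        return self.head(torch.cat(feats, dim=1))                         # (batch, n_classes)
```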
Anyway, I welcome any and all feedback. I hope someone finds some of this helpful, cheers!
EDIT: The gist was not loading properly even when I downloaded the data and tried to run it locally… It seems to work when uploaded directly to a git repo, so I have uploaded it to mine; if you guys would like, I can upload it to the repo created by @oguiza as well.