Well, I’m still doing something wrong. Somehow I am not installing the correct packages. I think this is the issue because I get an error about nbdev not being installed. So I add `!pip install nbdev`, then hit another missing-package error, and so on until I reach `TSDataLoader`, which can’t be installed with `!pip install`.
```python
from fastai import *
from fastai2.basics import *
from fastseq.all import *
from fastseq.nbeats.model import *
from fastseq.nbeats.learner import *
from fastseq.nbeats.callbacks import *
```
Do you have any idea why the training/validation loss is NaN? Even when I filled in the missing values manually, without going through FillMissing, I still got NaN.
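One thing worth checking before training: NaN losses often come from leftover NaN/inf values or a zero-variance series that breaks normalization. Here is a quick numpy sketch (`check_series` and `fill_missing` are illustrative helper names, not fastseq functions):

```python
import numpy as np

def check_series(y):
    """Report problems that commonly produce NaN losses in training."""
    y = np.asarray(y, dtype=float)
    return {
        "n_nan": int(np.isnan(y).sum()),      # missing values
        "n_inf": int(np.isinf(y).sum()),      # infinities from bad ratios/logs
        "zero_std": bool(np.nanstd(y) == 0),  # zero variance breaks normalization
    }

def fill_missing(y):
    """Linearly interpolate NaNs; a simple stand-in for FillMissing."""
    y = np.asarray(y, dtype=float).copy()
    idx = np.arange(len(y))
    good = ~np.isnan(y)
    # interpolate the missing points from the non-missing ones
    y[~good] = np.interp(idx[~good], idx[good], y[good])
    return y
```

For example, `check_series([1.0, float("nan"), 3.0])` flags one NaN, and `fill_missing` replaces it with the interpolated value 2.0. If the checks all come back clean, the NaN is more likely coming from the training itself (e.g. a too-high learning rate).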
Can someone point me to an example or blog post for multivariate time series forecasting using fastai, where we can also pass in categorical columns like day of week…
I looked at the fastseq example, but that is a univariate example. I have two months of data and I need to predict the next fifteen days.
You might check out Amazon Labs’ time series forecasting repo, GluonTS.
GluonTS uses Apache MXNet (instead of PyTorch or TensorFlow). It implements many state-of-the-art architectures (DeepFactor, DeepAR, DeepState, GP Forecaster, GP Var, LSTNet, N-BEATS, NPTS, Prophet, R Forecast, seq2seq, Simple FeedForward, Transformer, Trivial, and WaveNet). Several of them (DeepFactor, DeepAR, DeepState) can also use categorical covariates and produce probabilistic forecasts.
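To make the probabilistic-forecasting part concrete: models like DeepAR emit sampled future paths rather than a single prediction, and you summarize them into a point forecast plus prediction intervals. A minimal numpy sketch of that last step (the sample array here is fake data, not GluonTS output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are 100 sampled future paths over a 15-step horizon,
# the kind of output a probabilistic model like DeepAR produces.
samples = rng.normal(loc=10.0, scale=2.0, size=(100, 15))

# Point forecast: the per-step median across sample paths
median = np.quantile(samples, 0.5, axis=0)

# 80% prediction interval from the 10th/90th percentiles
lo, hi = np.quantile(samples, [0.1, 0.9], axis=0)
```

This is why probabilistic forecasters are handy for planning: you get an uncertainty band for each horizon step, not just one number.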
Awesome, I tried it out and it works great. With N-BEATS, is it possible to use multivariate time series? I’ve tried looking for examples and haven’t really found any.
Hi, I am amazed to see a forecasting implementation in fastai2. I ran the ‘index’/‘overview’ notebook of fastseq without any problem. But I couldn’t understand:
Train - Validation - Test split strategy
Why is season = lookback + horizon taken as a hyper-parameter for nbeats_learner?
Why doesn’t the lr_find plot show the loss increasing rapidly after a certain learning rate (lr)?
Is learn.fit_flat_cos a different learning strategy compared to fit_one_cycle?
It would be very helpful if you included these pieces of information in the overview documentation.
Hi, very good questions. I’ll try to answer them as best I can in the docs. In the meantime, here are a couple of quick links to keep you moving:
Priyatham10:
Train - Validation - Test split strategy
What little documentation there is, you can find here. I’ll try to find time to add a few more examples.
Priyatham10:
Why is season = lookback + horizon taken as a hyper-parameter for nbeats_learner?
Season is the maximum period for the SeasonalityBlock. There are more examples in the link. The default setting worked best for my data, but it does help to tweak this one.
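To sketch what “maximum period” means here (this illustrates the N-BEATS seasonality idea, not fastseq’s exact code): the seasonality block fits the window with a Fourier basis, and `season` caps the period of the slowest wave in that basis. With season = lookback + horizon, the slowest seasonal pattern the block can express spans the whole window.

```python
import numpy as np

def fourier_basis(length, max_period, n_harmonics=4):
    """Sin/cos basis whose slowest wave has period `max_period`.

    Each harmonic k has period max_period / k, so max_period bounds
    the longest seasonal cycle the block can represent.
    """
    t = np.arange(length)
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / max_period))
        cols.append(np.cos(2 * np.pi * k * t / max_period))
    return np.stack(cols, axis=1)  # shape (length, 2 * n_harmonics)

# e.g. lookback=21, horizon=7  ->  season = 28
basis = fourier_basis(length=28, max_period=28)
```

Lowering `season` below lookback + horizon forces the block to only model faster cycles, which is why tweaking it can help when your true seasonality is shorter than the window.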
Priyatham10:
Why doesn’t the lr_find plot show the loss increasing rapidly after a certain learning rate (lr)?
No idea. I could speculate, but I have not investigated the matter.
Priyatham10:
Is learn.fit_flat_cos a different learning strategy compared to fit_one_cycle?
Originally I also used fit_one_cycle, but after the success of Mish with fit_flat_cos on the Imagenette/Imagewoof leaderboards I decided to give it a shot. It did do better, but it introduced more dependencies than I was willing to put up with, so in the end I removed that part (though I forgot to remove it from the README). I’m not sure if it still helps with ReLU as the activation. Here is a link to the official documentation: https://dev.fast.ai/callback.schedule#Learner.fit_flat_cos
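The difference in schedule shape is easy to see in a few lines. fit_one_cycle warms the learning rate up and then anneals it; fit_flat_cos holds it flat and then cosine-anneals. A toy sketch of the flat-cos shape (the real implementation lives in fastai’s scheduler callbacks; `pct_flat` here is just an illustrative parameter name):

```python
import math

def flat_cos_lr(step, total_steps, lr=1e-3, pct_flat=0.75):
    """Flat LR for the first pct_flat of training, then cosine decay to ~0."""
    flat_steps = int(total_steps * pct_flat)
    if step < flat_steps:
        return lr  # no warmup: start straight at the max LR
    # position within the cosine-decay tail, from 0 to 1
    p = (step - flat_steps) / max(1, total_steps - flat_steps)
    return lr * (1 + math.cos(math.pi * p)) / 2
```

So for 100 steps you get the full `lr` for the first 75 and a smooth decay to zero over the last 25, whereas one-cycle would also have spent early steps ramping up.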
Thank you so much for the explanations. I tried the same approach on the airline-passengers dataset, but the results are not satisfying. Is the architecture implementation here in fastseq currently capable of giving the best results, or is work still in progress toward state-of-the-art performance? If a state-of-the-art implementation for forecasting already exists, please point me in that direction.