Hi
I’m working on a time series forecasting problem with the fastai tabular API, following the Rossmann challenge.
When I train twice with the same data and the same model parameters, I get completely different results for a given point.
Thinking this may be due to the randomized batch selection during training, my idea is to first fix the batch selection across training runs.
The following code gives different results when run twice; how can I fix this?
Is there a parameter for this? I saw fix_dl=None but don’t know how to use it properly.
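One common approach (not fastai-specific, so treat this as a sketch) is to pin every random number generator before building the DataBunch and the learner. The `fix_seeds` helper name is mine, not part of fastai:

```python
import random
import numpy as np

def fix_seeds(seed=42):
    # Sketch: pin the RNGs that drive batch shuffling and
    # weight initialization so two runs draw the same numbers.
    random.seed(seed)
    np.random.seed(seed)
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # cuDNN autotuning is a further source of nondeterminism;
        # these flags trade some speed for repeatability.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass  # torch not installed in this environment
```

Call it with the same seed at the top of each run, before creating the DataBunch, and the shuffled batch order should repeat.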
When generating the training loader, the data is shuffled and the last batch (if incomplete) is dropped. If you show a batch from the validation set, it should always be the same.
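To make the difference concrete, here is a minimal pure-Python illustration of the two loader behaviours described above (the `batches` helper is mine, just mimicking what the real DataLoaders do):

```python
import random

def batches(items, bs, shuffle=False, drop_last=False):
    # Minimal illustration of a data loader: optionally shuffle
    # the items, slice into batches, optionally drop the short
    # final batch (as the training loader does).
    items = list(items)
    if shuffle:
        random.shuffle(items)
    out = [items[i:i + bs] for i in range(0, len(items), bs)]
    if drop_last and out and len(out[-1]) < bs:
        out.pop()
    return out

# Validation-style loader: deterministic, keeps the short batch.
batches(range(10), 4)   # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]

# Training-style loader: order changes run to run, and only the
# two full batches survive because the last one is dropped.
batches(range(10), 4, shuffle=True, drop_last=True)
```

This is why showing a batch from the training loader differs between runs while the validation loader is stable, and why fixing the RNG seed is enough to make the training order repeat.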
If the optional parameter test_df=df_test is given to TextLMDataBunch and TextClasDataBunch, how can we load a pretrained model and have the loaded learner’s get_preds() use the df_test that was already provided?
I get a NoneType error when I use get_preds(ds_type=DatasetType.Test).
However, when I add df_test via load_learner or add_test, it works.
What is the point of being able to specify test_df in the DataBunch if it cannot be used in prediction unless it is added again via load_learner or add_test?
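For reference, the workaround described above can be sketched like this (fastai v1 API assumed; the helper name, export path, and text column are mine, not from the thread):

```python
def predict_on_test(export_path, df_test, text_col='text'):
    # Sketch: the test_df passed at DataBunch creation is not
    # serialized with the exported model, so it has to be
    # re-attached at load time before calling get_preds.
    from fastai.text import load_learner, TextList
    from fastai.basic_data import DatasetType

    # load_learner's `test` argument expects an ItemList built
    # from the DataFrame, not the DataFrame itself.
    learn = load_learner(
        '.', export_path,
        test=TextList.from_df(df_test, cols=text_col),
    )
    preds, _ = learn.get_preds(ds_type=DatasetType.Test)
    return preds
```

Note that get_preds on the test set returns predictions in the order of df_test, since the test loader is not shuffled.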