Tabular progressive learning - experiences

Hi all,

I am playing around with the fast.ai tabular learner to see how it works on the Kaggle House Prices and Rossmann datasets. In the latter case, the dataset is much bigger and training time increased, hindering experimentation.

Let me share my experience trying to apply progressive learning, that is, what Jeremy suggested about starting experimental training with reduced image sizes, but applied to the tabular case.
In my tests I used very limited feature engineering - only add_datepart.
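In case it helps, that step looked roughly like this (a minimal sketch assuming fastai v1 and a raw date column named 'Date'; the DataFrame names are placeholders from my notebook):

```python
from fastai.tabular import *  # fastai v1 star-import

# Expand the raw date column into Year, Month, Week, Day, Dayofweek,
# Is_month_end, Elapsed, etc., and drop the original column.
add_datepart(train_df, 'Date', drop=True)
add_datepart(test_df, 'Date', drop=True)
```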

The steps I followed were (a rough code sketch follows the list):

  • Create a baseline model using fit_one_cycle for 10 epochs on the full dataset - very slow. Fine-tune the model for 2 additional epochs. Initial submission scored 0.17538.
  • Create two datasets: the full dataset (~1M rows) and a small dataset (~170k rows) - both with validation sets split by date, not shuffled.
  • Create the learner with the full dataset - so that the embedding matrices get initialized with sizes that cover every category in the richer dataset.
  • Point the learner at the small dataset (learner.data = small_data, where small_data is the DataBunch built from the small dataset).
  • Fit the model using fit_one_cycle for 10 epochs - this was significantly faster than before.
  • Point the learner back at the full dataset.
  • Fine-tune the model for 1 additional epoch. Second submission scored 0.13595.
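In fastai v1 terms, the whole sequence looked roughly like the sketch below. The column lists, validation sizes, batch size, learning rates, and layer sizes are placeholders from my notebook rather than anything canonical, and I'm simplifying the Rossmann preprocessing (train_df is assumed sorted by date, ascending):

```python
from fastai.tabular import *  # fastai v1 (star-import also provides np, torch, defaults)

procs      = [FillMissing, Categorify, Normalize]
dep_var    = 'Sales'
cat_names  = ['Store', 'DayOfWeek', 'Year', 'Month', 'Week', 'Day']  # placeholders
cont_names = ['CompetitionDistance', 'Elapsed']                      # placeholders

def make_databunch(df, n_valid):
    # Time-based validation: hold out the last n_valid rows, no shuffling.
    valid_idx = list(range(len(df) - n_valid, len(df)))
    return (TabularList.from_df(df, cat_names=cat_names, cont_names=cont_names, procs=procs)
            .split_by_idx(valid_idx)
            .label_from_df(cols=dep_var, label_cls=FloatList, log=True)
            .databunch(bs=1024))

full_data  = make_databunch(train_df, n_valid=40_000)                 # ~1M rows
small_data = make_databunch(train_df.iloc[-170_000:].reset_index(drop=True),
                            n_valid=10_000)                           # ~170k rows

# Build the learner on the FULL DataBunch so the embedding matrices are
# sized for every category level present in the richer dataset.
max_log_y = np.log(train_df[dep_var].max() * 1.2)
learn = tabular_learner(full_data, layers=[1000, 500],
                        y_range=torch.tensor([0, max_log_y], device=defaults.device),
                        metrics=exp_rmspe)

learn.data = small_data        # point the learner at the small DataBunch
learn.fit_one_cycle(10, 1e-2)  # fast experimentation loop

learn.data = full_data         # swap the full DataBunch back in
learn.fit_one_cycle(1, 1e-3)   # one fine-tuning epoch on all the data
```

One caveat I am not sure about: each DataBunch fits its own procs (e.g. the Normalize statistics), so the small and full datasets end up with slightly different preprocessing; there may be a cleaner way to share the processors between the two.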

My feeling is that the progressive learning strategy transfers well to tabular scenarios, speeding up training and making experimentation easier. However, I have limited practical experience.

Is the progressive learning approach commonly used in tabular scenarios? If so, is this the way it is normally applied?

Thanks very much!
