Tabular Transfer Learning and/or retraining with fastai

(Jeremy Easterbrook) #1

Hello to all the amazing Fast.ai community! I love to help out here, but I'm now in the position of wanting to ask some of the more advanced among you who have worked on similar tabular problems.

For my job, I successfully developed tabular models with Fast.ai that use over 60 categorical features (country, device, etc.) and around 20 continuous features (e.g. datepart). These models are now in production, but we are looking to go further: several new values are added each day, requiring us to retrain the models.

I was really inspired by the performance and progress made by transfer learning in vision (ResNet) and text (ULMFiT), but have not seen any research on tabular data.

Similarly to the work done by Pinterest and Instacart, I would like to reuse the fast.ai categorical embeddings to train new models with fewer data points, or on similar problems. Exporting the PKL and extracting the weights is simple…

But how do you prune the model and load it inside a new one, while keeping the categorical cat_codes in the same order and just as effective?
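To illustrate what I mean by pruning down to just the embeddings, here is a rough sketch in plain Python (a dict stands in for the PyTorch state dict; the `embeds.<i>.weight` key pattern follows fastai v1's TabularModel naming, and the helper name is mine, not a fastai API):

```python
def extract_embeddings(state_dict, cat_names):
    """Return {var_name: weight_matrix} for each categorical variable,
    pruning away the rest of the network (linear layers, batchnorm).

    Assumes cat_names is in the same order as the model's embedding
    list, so embeds.<i>.weight corresponds to cat_names[i].
    """
    return {name: state_dict[f"embeds.{i}.weight"]
            for i, name in enumerate(cat_names)}

# Toy state dict: two embedding layers plus one linear layer to prune.
state = {
    "embeds.0.weight": [[0.0, 0.0], [1.0, 2.0]],      # "country", 2 dims
    "embeds.1.weight": [[0.5], [1.5], [2.5]],          # "device", 1 dim
    "layers.0.weight": [[9.9]],                        # dropped by pruning
}
emb = extract_embeddings(state, ["country", "device"])
```

The extracted dict, keyed by variable name rather than layer index, is what you would store and later map onto a new model's vocabularies.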

Alternatively, we could simply retrain models from scratch every time, but that feels like a waste of compute. Or we could load the .pth file, but that does not seem efficient to store on AWS, and it still does not tell me how to plug in the new DataBunch.

I’ve followed the 2018 DL part 1 & 2 and DL 2019 courses, and I’ve searched the forums several times with different keywords, as well as Google and GitHub, to find a clear way to do it.

Would extremely appreciate some help!


#2

It’s a bit tricky if you have new categorical codes, as it will require you to change the embeddings. There is no pre-written function in fastai to help, but you should have a look at the function load_pretrained in fastai.text.learner: it matches word ids from an old vocab to a new one and creates the corresponding embedding matrix. You would need the same for all the categorical variables.
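That per-variable matching could be sketched like this in plain Python (the helper name, data layout, and the mean-initialisation of unseen categories are my assumptions, mirroring what load_pretrained does for out-of-vocab tokens; this is not an existing fastai API):

```python
def match_embeddings(old_vocabs, new_vocabs, old_matrices):
    """For each categorical variable, build an embedding matrix aligned
    to the NEW vocab: copy rows for categories the old model knew, and
    mean-initialise rows for categories it never saw.

    old_vocabs / new_vocabs: dict var_name -> ordered list of categories
    old_matrices: dict var_name -> embedding rows, in old-vocab order
    """
    new_matrices = {}
    for var, new_vocab in new_vocabs.items():
        rows = old_matrices[var]
        # Fallback row for unseen categories: the mean of the old rows.
        mean_row = [sum(col) / len(rows) for col in zip(*rows)]
        old_idx = {cat: i for i, cat in enumerate(old_vocabs[var])}
        new_matrices[var] = [
            list(rows[old_idx[cat]]) if cat in old_idx else list(mean_row)
            for cat in new_vocab
        ]
    return new_matrices

old_vocabs = {"country": ["#na#", "US", "FR"]}
old_mats = {"country": [[0.0, 0.0], [1.0, 2.0], [3.0, 4.0]]}
new_vocabs = {"country": ["#na#", "FR", "DE", "US"]}  # reordered + 1 new
new_mats = match_embeddings(old_vocabs, new_vocabs, old_mats)
# "FR" keeps its learned row even though its cat code changed;
# "DE" starts from the mean row.
```

The key point is that rows follow the new vocab's order, so the new model's cat codes index the right embeddings even if the category ordering changed between datasets.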

As for not loading the .pth file, there is no workaround for that for now. You could probably implement some pruning yourself, but there is nothing like this in fastai.


(Jeremy Easterbrook) #3

Thanks for the pointers, I will research and share progress.

I think there is an opportunity here. Could producing general-purpose categorical embeddings (categories, products, geos, datetime, etc.) for use across domains offer faster convergence and better performance? I see this the same way the ULMFiT language models are being used today.
