Tabular Model with NLP text features

Is there a way to extend the tabular model with custom NLP text features?

Suppose we have a data frame of real estate characteristics (“tabular characteristics”), e.g. square meters, age, condition, price, …, plus a detailed text description of each property. An example of such a dataset can be found HERE (I created this dataset; the descriptions are in Czech).

I would like to have one deep learning model that uses the tabular features as well as the text features.

One workaround would be to train a separate ULMFiT model on the estate descriptions and use its encoder.

I have created a wrapper for that:

import torch
from sklearn.base import BaseEstimator
from fastai.text.all import Numericalize

class Text_Encoder(TextPrep, BaseEstimator):  # TextPrep is a custom preprocessing mixin (not shown here)
    "Wrap a trained ULMFiT learner and expose its encoder for single documents."

    def __init__(self, learn):
        self.learn = learn
        self.dls = self.learn.dls
        self.numericalizer = Numericalize(vocab=self.dls.vocab)
        self.tokenizer = self.dls.tokenizer
        # learn.model[0] is the AWD-LSTM encoder; put it in eval mode
        self.awd_lstm = self.learn.model[0].eval()

    @classmethod
    def from_learner(cls, learn):
        return cls(learn)

    def _process_doc(self, doc):
        # Tokenize and numericalize a single document into a tensor of token ids
        return self.numericalizer(self.tokenizer(str(doc)))

    def _encode_doc(self, doc):
        xb = self._process_doc(doc)
        xb = xb.reshape((1, xb.size()[0]))  # add a batch dimension
        self.awd_lstm.reset()               # reset the LSTM hidden state
        with torch.no_grad():
            out = self.awd_lstm(xb)
        # Max-pool the encoder outputs over the sequence dimension
        return out[0].max(0).values.detach().numpy()

    def encode_single(self, text):
        return self._encode_doc(text)

This wrapper uses the pre-trained model to convert the text column into a fixed-size vector, so a separate tabular model can then be trained on what are now purely tabular (numeric) features.
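For context, a minimal usage sketch of this workaround (the df, "text", and learn_lm names are hypothetical): encode each description, add the resulting vector components as extra numeric columns, and train an ordinary tabular model on the result.

import numpy as np
import pandas as pd

# df has the tabular columns plus a "text" column; learn_lm is the fine-tuned ULMFiT learner
encoder = Text_Encoder.from_learner(learn_lm)

vecs = np.stack([encoder.encode_single(t) for t in df["text"]])
text_cols = [f"txt_{i}" for i in range(vecs.shape[1])]
df_enc = pd.concat([df.drop(columns="text"),
                    pd.DataFrame(vecs, columns=text_cols, index=df.index)], axis=1)
# df_enc can now go into a plain TabularPandas / tabular_learner pipeline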

However, I want to use the text features directly in one model, not via this workaround. Is there any way to extend the tabular learner with NLP components within the fastai ecosystem?


Definitely watching this thread, because this is exactly the use case I would like to tackle as well. I was thinking I might have to do it in two steps: train an NLP model on the text only to generate predictions, and then combine those predictions with the tabular features. I have not yet seen an integrated example with fastai, but I would be surprised if no one in the community is working on an integrated multi-modal approach.
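For what it's worth, a minimal sketch of that two-step idea with fastai v2 (the df, "description", "sqm", "age", "condition", and "price" names are hypothetical): get the text model's prediction for each row and feed it to the tabular learner as one more continuous column.

from fastai.tabular.all import *

# text_learn: a text model trained on the description column alone (hypothetical name)
dl = text_learn.dls.test_dl(df)
preds, _ = text_learn.get_preds(dl=dl)
df["text_pred"] = preds.squeeze().numpy()   # regression output; for a classifier, use the class probabilities instead

# Treat the prediction as an extra continuous feature for an ordinary tabular learner
to = TabularPandas(df, procs=[Categorify, Normalize],
                   cat_names=["condition"], cont_names=["sqm", "age", "text_pred"],
                   y_names="price", splits=RandomSplitter()(range_of(df)))
learn = tabular_learner(to.dataloaders(bs=64), metrics=rmse)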


Thanks for sharing this interesting dataset; I have been wanting to experiment with open multimodal datasets.

I am having a hard time finding examples on docs.fast.ai itself for this type of multimodal scenario.

Two approaches I would experiment with:

  1. Turning all the features into text, i.e. casting the categorical and numerical values to strings and concatenating them with the description into a single text column as the only input. This turns the multimodal problem into a plain text problem (a minimal sketch follows this list).

E.g.: Dresses [SEP] General [SEP] 34 [SEP] 5 [TEXT]

  2. Changing the head of the model to combine the features from the numerical & categorical columns with the text features, as described in the Multimodal Toolkit [2] (a rough sketch follows the resource list below).

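A minimal sketch of option 1 with fastai (the column names "condition", "sqm", "age", "description", and "price" are hypothetical): serialize each row into one string and train a plain text model on it.

from fastai.text.all import *

# Hypothetical column names; every row becomes a single text field
df["full_text"] = (df["condition"].astype(str) + " [SEP] "
                   + df["sqm"].astype(str) + " [SEP] "
                   + df["age"].astype(str) + " [SEP] "
                   + df["description"].astype(str))

dls = TextDataLoaders.from_df(df, text_col="full_text", label_col="price",
                              y_block=RegressionBlock(), valid_pct=0.2)
learn = text_classifier_learner(dls, AWD_LSTM, n_out=1, metrics=rmse)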

The resources below may guide you further:

[1] Combining Categorical and Numerical Features with Text in BERT · Chris McCormick
[2] Multimodal Transformers Documentation — Multimodal Transformers documentation
[3] Combine BertForSequenceClassificaion with Additional Features - 🤗Transformers - Hugging Face Forums
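To make option 2 concrete, here is a rough, framework-agnostic PyTorch sketch (these are not the Multimodal Toolkit's actual classes; the layer sizes, names, and the assumed encoder output shape are all illustrative): pool the text encoder's output, concatenate it with categorical embeddings and the continuous features, and put a small MLP head on top.

import torch
import torch.nn as nn

class TabularTextModel(nn.Module):
    "Illustrative only: combine a text encoder with tabular inputs by concatenation."
    def __init__(self, text_encoder, text_dim, emb_szs, n_cont, n_out):
        super().__init__()
        self.text_encoder = text_encoder                 # e.g. an AWD-LSTM or transformer body
        self.embeds = nn.ModuleList([nn.Embedding(nc, sz) for nc, sz in emb_szs])
        emb_dim = sum(sz for _, sz in emb_szs)
        self.head = nn.Sequential(
            nn.Linear(text_dim + emb_dim + n_cont, 200), nn.ReLU(),
            nn.Linear(200, n_out),
        )

    def forward(self, x_text, x_cat, x_cont):
        enc = self.text_encoder(x_text)                  # assumed shape: (bs, seq_len, text_dim)
        txt = enc.max(dim=1).values                      # max-pool over the sequence
        cat = torch.cat([e(x_cat[:, i]) for i, e in enumerate(self.embeds)], dim=1)
        return self.head(torch.cat([txt, cat, x_cont], dim=1))

The main fastai-specific work is then building DataLoaders that yield (x_text, x_cat, x_cont) batches matching this forward signature.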


Alternatively, there are already plain-PyTorch approaches on Kaggle (5th Place Solution Code | Kaggle, the 5th-place solution for the PetFinder competition; an overview is at PetFinder.my Adoption Prediction | Kaggle), as well as at least one fastai approach (GitHub - EtienneT/fastai-petfinder: Merging image, tabular and text data in a neural network with fastai; most likely fastai v1).

Otherwise, I think pytorch-widedeep (GitHub - jrzaurin/pytorch-widedeep: A flexible package for multimodal-deep-learning to combine tabular data with text and images using Wide and Deep models in Pytorch) should be a good starting point.


You can easily use AutoGluon for such data: Multimodal Data Tables: Combining BERT/Transformers and Classical Tabular Models — AutoGluon Documentation 0.5.0 documentation

This paper: Benchmarking Multimodal AutoML for Tabular Data with Text Fields

compares a number of strategies for modeling such data and finds that the one used in AutoGluon can give very high accuracy (it even places 1st or 2nd on the historical leaderboards of numerous ML competitions from Kaggle & MachineHack).
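For reference, a minimal sketch along the lines of that tutorial (the "price" label and the DataFrame names are placeholders; the multimodal preset assumes the relevant AutoGluon extras are installed):

from autogluon.tabular import TabularPredictor

# train_df / test_df contain the tabular columns plus the raw text description column
predictor = TabularPredictor(label="price").fit(train_df, hyperparameters="multimodal")
preds = predictor.predict(test_df)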

