Combining vision and tabular models in v2


I would like to build a model that combines e.g. image and tabular data (example dataset). I saw a couple of notebooks for v1 but couldn’t find any for v2. Has anybody done that already? If not, could you give me some directions on how to build:

  1. a dataloader that uses ImageBlock, TabularBlock, and CategoryBlock (how do I provide two get_item functions? how does DataBlock.dataloaders(path) work with two input blocks?)

  2. a model that combines the image and tabular data
    I guess I have to build two custom PyTorch models, cut off their heads, and add a custom head that combines the two models?!

  3. a learner that works with that dataloader and model
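For step 2, here is roughly what I have in mind — a minimal, untested sketch in plain PyTorch (all names and layer sizes are made up):

```python
import torch
import torch.nn as nn

class ImageTabularModel(nn.Module):
    "Concatenate features from an image body and a tabular body, then apply one shared head."
    def __init__(self, img_body, tab_body, img_feats, tab_feats, n_out):
        super().__init__()
        self.img_body = img_body  # e.g. a CNN with its head cut off
        self.tab_body = tab_body  # e.g. a tabular trunk with its head cut off
        self.head = nn.Sequential(
            nn.Linear(img_feats + tab_feats, 128),
            nn.ReLU(),
            nn.Linear(128, n_out),
        )

    def forward(self, img, tab):
        # each body emits a flat feature vector per sample; concat along features
        x = torch.cat([self.img_body(img), self.tab_body(tab)], dim=1)
        return self.head(x)
```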

Thanks Florian


I’ve only done this for v1 and haven’t tried it for v2 yet. However, there is documentation for custom nets requiring multiple inputs, e.g., Siamese networks.


Did you try passing three different functions to getters in DataBlock?

Check this thread.


Thanks for the links. Now that I’ve figured out how to pass multiple get_x functions (by using getters), I found that there is no TabularBlock — there are only TabularDataLoaders and TabularPandas. I couldn’t figure out how to use them to build a combined DataBlock or DataLoader. So if anyone has ideas on how to do that (@muellerzr maybe? :wink: ) please let me know.


There’s a thread where someone attempted to combine tabular with text. You can’t use the data block API for it right away because TabularPandas isn’t a block (it doesn’t really fit into the block API; it kind of floats separately). I’ll find it in a moment and edit this post.


Found it @florianl

However, in general the API will let you use any number of inputs and outputs. When using the high-level DataBlock API, specify n_inp=2 for two input blocks (the first two you pass in).
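Concretely, all n_inp does is tell the learner where to split each batch: the first n_inp items go to the model, the rest go to the loss. A plain-Python illustration of that convention (not actual fastai code):

```python
# Plain-Python illustration of the n_inp convention, not fastai internals.
def split_batch(batch, n_inp):
    "First n_inp items are the model's inputs; the rest are the targets."
    return batch[:n_inp], batch[n_inp:]

# With n_inp=2, a three-element batch splits into two inputs and one target.
batch = ('image_tensor', 'tabular_tensor', 'target')
xb, yb = split_batch(batch, n_inp=2)
# the model is then called as model(*xb) and the loss as loss_func(pred, *yb)
```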


Thanks! That’s a good starting point. :slight_smile: