Kaggle NLP Competition - Toxic Comment Classification Challenge

I’m struggling to figure out how to use the language model I trained to make predictions for multiple labels. There are two problems I’ve spent a couple of days on.

  1. Creating dataset splits that feed multiple labels into torchtext. I created a custom dataset that takes in dataframes and creates a different field for each label (similar to this post: Creating a ModelData object without torchtext splits?). Is this on the right track, or should I be feeding a list of six numbers directly to a single label field for each example? I’d rather not post my actual competition code since this is a Kaggle competition, but there’s a stripped-down sketch of what I mean below the list.

  2. Modifying the model decoder to output 6 predictions instead of one. As per this thread (Question on labeling text for sentiment analysis), I modified PoolingLinearClassifier to apply a sigmoid over 6 output units (a simplified version of that change is also sketched below). Is this on the right track? I’m still not sure how the model will know what loss to use or which of the fields from the splits will be treated as labels.
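
To keep it generic, here’s a minimal sketch of the kind of dataset I’m describing in (1), using the legacy `torchtext.data` API and a single field that carries all six targets as a float vector. The column names are just what I remember from the competition CSV, and `dtype=` assumes a newer legacy torchtext (older versions spelled it `tensor_type=`):

```python
import torch
from torchtext import data

# label columns from the competition CSV (as I remember them)
LABEL_COLS = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']

TEXT = data.Field(lower=True)
# one non-sequential field holding all six 0/1 targets as floats
LABELS = data.Field(sequential=False, use_vocab=False, dtype=torch.float)

class MultiLabelDataset(data.Dataset):
    """Build torchtext Examples straight from a pandas DataFrame."""
    def __init__(self, df, text_field, label_field, **kwargs):
        fields = [('text', text_field), ('labels', label_field)]
        examples = [
            data.Example.fromlist(
                [row['comment_text'], [float(row[c]) for c in LABEL_COLS]],
                fields)
            for _, row in df.iterrows()
        ]
        super().__init__(examples, fields, **kwargs)

# usage: trn_ds = MultiLabelDataset(train_df, TEXT, LABELS)
#        TEXT.build_vocab(trn_ds)
```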

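For (2), here’s a simplified stand-in for the modified head (this isn’t the real PoolingLinearClassifier, just the shape of the change): concat-pool the encoder outputs, then a linear layer with 6 units. I’ve left the outputs as raw logits so they pair with binary cross-entropy with logits; applying a sigmoid inside the head would pair with plain BCELoss instead:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLabelHead(nn.Module):
    def __init__(self, emb_dim, n_labels=6):
        super().__init__()
        # last hidden state + mean-pool + max-pool over time -> 3 * emb_dim features
        self.fc = nn.Linear(3 * emb_dim, n_labels)

    def forward(self, rnn_output):
        # rnn_output: [seq_len, batch, emb_dim] from the encoder
        last = rnn_output[-1]                  # [batch, emb_dim]
        avg = rnn_output.mean(dim=0)           # [batch, emb_dim]
        mx = rnn_output.max(dim=0)[0]          # [batch, emb_dim]
        x = torch.cat([last, avg, mx], dim=1)  # [batch, 3 * emb_dim]
        return self.fc(x)                      # raw logits, one per label

# multi-label loss, with labels shaped [batch, 6] as 0/1 floats:
# loss = F.binary_cross_entropy_with_logits(logits, labels)
```

As far as I can tell, the loss doesn’t get inferred from the fields; whichever training wrapper is used, the binary cross-entropy criterion and which field counts as the target have to be supplied explicitly.
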
Anyway, any help on this would be much appreciated! Is this way simpler than I’m making it? I feel like I’m missing something here!
