Kaggle NLP Competition - Toxic Comment Classification Challenge

Check the Kaggle forums - there’s a comment with hundreds of exclamation marks that’s killing spaCy.

2 Likes

I see… beware of test data id = 206058417140

Thanks for the heads up!!!

If someone wants to start with a kernel, here is my contribution, mostly inspired by Jeremy’s kernel:

https://www.kaggle.com/devm2024/cnn-lstm-eda-lb-0-067/

3 Likes

@jamesrequa Nice to see you there, bro!

2 Likes

Hey @sermakarevich, nice to see you too! You seem to make a habit of leap-frogging me haha

3 Likes

Hehe, you started it :slight_smile:

I don’t know what the chances were of finding you in a random competition on Kaggle and ending up within ±4 places of each other… I was really surprised :)

Hi @jamesrequa and @sermakarevich,

Nice work, both of you!! I’ve been looking at this competition and am leaning towards training a binary classifier for each category rather than trying to predict everything at once. What did you guys do?

2 Likes

Hi @hiromi. Glad to know you are here as well. I am a complete newbie at NLP, so I just try to learn and implement from scratch everything people recommend on the forum:

  • TF-IDF on words
  • TF-IDF on chars
  • naive Bayes features
  • LSTM
  • GRU
  • fastText

Sklearn pipelines help a lot to make your code clearer and automate lots of stuff. You can basically wrap anything in an sklearn estimator.
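A minimal sketch of the kind of pipeline this enables (the column names follow the competition data, but the vectorizer settings and C value here are illustrative, not tuned):

    from sklearn.pipeline import Pipeline, FeatureUnion
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # TF-IDF on words and on character n-grams, concatenated side by side
    pipeline = Pipeline([
        ("features", FeatureUnion([
            ("word_tfidf", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
            ("char_tfidf", TfidfVectorizer(analyzer="char", ngram_range=(2, 5))),
        ])),
        ("clf", LogisticRegression(C=4.0)),
    ])

    # One binary model per toxicity label, as @hiromi suggested above:
    # pipeline.fit(train["comment_text"], train["toxic"])
    # probs = pipeline.predict_proba(test["comment_text"])[:, 1]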

10 Likes

Thanks for the tips! Wow, it’s impressive you tried all that. Can’t wait to hear all about what kind of findings you made once the competition is over :slight_smile:

1 Like

Hi everyone,
Thanks for the tips, guys. I just hit 0.9835 using a bidirectional GRU and GloVe word embeddings…
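For anyone curious, a hedged sketch of that kind of model in Keras - the exact architecture behind the 0.9835 isn’t in the post, and embedding_matrix, max_features, and maxlen are assumed to be built from the GloVe file and the tokenizer:

    from keras.models import Model
    from keras.layers import Input, Embedding, Bidirectional, GRU
    from keras.layers import GlobalMaxPooling1D, Dense

    inp = Input(shape=(maxlen,))
    # frozen pretrained GloVe vectors
    x = Embedding(max_features, 300, weights=[embedding_matrix],
                  trainable=False)(inp)
    x = Bidirectional(GRU(64, return_sequences=True))(x)
    x = GlobalMaxPooling1D()(x)
    out = Dense(6, activation="sigmoid")(x)  # one sigmoid per toxicity label

    model = Model(inp, out)
    model.compile(loss="binary_crossentropy", optimizer="adam")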


Anyone interested in forming a team from the fast.ai community?

5 Likes

Hi Bohdan,
I’m interested in forming a team.
My Kaggle user is bruno16.
Rgds
Bruno

I’m struggling to figure out how to use the language model I trained to make predictions for multiple labels. There are two problems I’ve spent a couple of days on.

  1. Creating dataset splits that feed multiple labels into torchtext. I created a custom dataset that takes in dataframes and creates a different field for each label (similar to this post: Creating a ModelData object without torchtext splits?). Is this on the right track, or should I be feeding a list of six numbers directly to the label field for each example? I’d post code, but I’m not sure if that’s allowed because this is a Kaggle competition.

  2. Modifying the model decoder to output 6 predictions instead of one. As per this thread (Question on labeling text for sentiment analysis), I modified PoolingLinearClassifier to output the sigmoid of 6 output units. Is this on the right track? I’m still not sure how the model will know what type of loss to use, or which of the fields from the splits will be treated as labels.
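Not the fastai internals, just a hedged illustration of the idea in (2): a head with 6 output units trained with binary cross-entropy, so each label gets its own independent sigmoid. The loss function, not the model, is what encodes the multi-label assumption:

    import torch.nn as nn

    class MultiLabelHead(nn.Module):
        """Maps the pooled encoder output to 6 independent label logits."""
        def __init__(self, in_features, n_labels=6):
            super().__init__()
            self.lin = nn.Linear(in_features, n_labels)

        def forward(self, pooled):
            return self.lin(pooled)  # raw logits, one per label

    # BCEWithLogitsLoss applies the sigmoid internally; targets are a
    # float tensor of shape [batch, 6], one 0/1 entry per label.
    criterion = nn.BCEWithLogitsLoss()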

Anyway, any help with this would be much appreciated! Is this simpler than I’m making it? I feel like I’m missing something here!

1 Like

What do you call the .py file where you keep your DL stuff?

You are on the right track!! Keep going :slight_smile:

1 Like

A very simple example of word-polarity analysis based on logistic regression coefficients:

https://www.kaggle.com/sermakarevich/words-polarity-based-on-lr-weights
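The idea, roughly - this is not the kernel’s exact code, and the column names are just the competition’s: fit TF-IDF plus logistic regression, then read each word’s polarity off the sign and magnitude of its coefficient:

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    vec = TfidfVectorizer(analyzer="word")
    X = vec.fit_transform(train["comment_text"])
    lr = LogisticRegression().fit(X, train["toxic"])

    # get_feature_names() on older sklearn versions
    words = np.array(vec.get_feature_names_out())
    order = np.argsort(lr.coef_[0])  # ascending by coefficient
    print("most toxic words:", words[order[-10:]])
    print("most benign words:", words[order[:10]])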

5 Likes

Here are some attempts by classmates to load the dataset with multiple labels (towards the bottom of the thread), if you find it helpful.

1 Like

Thank you so much for the help. I’m going to check out that discussion!

I am training a bidirectional LSTM with pretrained GloVe embeddings on a Crestle GPU. It is taking 1 hour per epoch. Is that normal?
When I trained a CNN with pretrained GloVe embeddings, it took only 1 minute per epoch.

With CuDNNLSTM, 1 epoch takes 2-3 minutes to run on a GTX 1080 Ti with 300-d embeddings.
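If the slow model is a plain Keras LSTM, the usual fix - assuming a TensorFlow backend with a GPU - is the fused CuDNNLSTM layer, a near drop-in replacement that only supports the default activations. A sketch with illustrative sizes:

    from keras.models import Sequential
    from keras.layers import Embedding, Bidirectional, CuDNNLSTM, Dense

    model = Sequential([
        Embedding(20000, 300, input_length=200),  # vocab size and length illustrative
        Bidirectional(CuDNNLSTM(64)),             # drop-in for LSTM(64), single fused cuDNN kernel
        Dense(6, activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam")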

How much time did it take to train the CNN? Was it significantly less?