Please let us see your kernels! :D

We’re starting to see some great blog posts now, but I’d love to see more kernels! If you have a kernel that’s getting a few votes, please share it with us too. :slight_smile: And if you’ve got a kernel you’d like help improving, share that too so we can all offer our ideas.

2 Likes

Hi! This is the kernel I put up for the Text Normalisation Challenge. Feedback on what I should keep in mind when writing kernels in the future is welcome!
https://www.kaggle.com/neerjad/class-wise-regex-functions-l-b-0-995
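
For anyone wondering what “class-wise regex functions” looks like in practice, here is a minimal sketch of the idea: one small normaliser per token class, dispatched by the class label the dataset provides. Everything below (names, classes, rules) is illustrative, not the kernel’s actual code.

```python
import re

DIGIT_WORDS = ["zero", "one", "two", "three", "four",
               "five", "six", "seven", "eight", "nine"]

def normalize_cardinal(token):
    # Toy rule: spell out each digit ("42" -> "four two").
    digits = re.sub(r"\D", "", token)  # keep digits only
    return " ".join(DIGIT_WORDS[int(d)] for d in digits)

def normalize_verbatim(token):
    return token.lower()

# Class label -> normaliser; unknown classes pass through unchanged.
NORMALIZERS = {"CARDINAL": normalize_cardinal, "VERBATIM": normalize_verbatim}

def normalize(token, token_class):
    return NORMALIZERS.get(token_class, lambda t: t)(token)

print(normalize("42", "CARDINAL"))  # -> "four two"
```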

1 Like

Hi, this is my kernel for understanding transfer learning in the context of the Iceberg challenge. Looking forward to a great discussion on the kernel.
https://www.kaggle.com/devm2024/transfer-learning-with-vgg-16-cnn-aug-lb-0-1712
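
For context, the core pattern is roughly this: reuse VGG16’s pretrained convolutional base and train a small head on top. A minimal Keras sketch follows, assuming 75x75x3 inputs (the Statoil image size); the head layers and sizes are my assumptions, not necessarily what the kernel uses.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Pretrained convolutional base, frozen so only the new head trains.
base = VGG16(weights="imagenet", include_top=False, input_shape=(75, 75, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary: iceberg vs ship
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```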

1 Like

Hi! I played around with XGBoost modeling and parameter tuning for the Porto Seguro competition. Please see my kernel here: https://www.kaggle.com/mashavasilenko/porto-seguro-xgb-modeling-and-parameters-tuning
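
The general shape of such a tuning loop, for anyone who wants to try it: a hedged sketch with an illustrative grid and synthetic stand-in data, not the kernel’s actual settings.

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV

# Synthetic, heavily imbalanced stand-in for the Porto Seguro data.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.96], random_state=0)

param_grid = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1],
    "subsample": [0.8, 1.0],
}
search = GridSearchCV(xgb.XGBClassifier(n_estimators=100),
                      param_grid, scoring="roc_auc", cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```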

Thanks!

1 Like

@mvasilenko What was the LB score that you were getting from this?

And you used undersampling to balance the classes. I’m not sure how much that matters, but oversampling is generally the more recommended approach.

@groverpr Hi! Yes, I know that, but I was running the notebook on my local machine and oversampling was taking too much time, so I decided to do undersampling first. I did oversampling for RF, but I wouldn’t say it gave me as much of an advantage as parameter tuning.
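
For readers following along, the two strategies being compared look roughly like this (a sketch using sklearn’s resample; the target column name is a placeholder, not from the actual kernel):

```python
import pandas as pd
from sklearn.utils import resample

def undersample(df, target="target"):
    # Drop majority rows until the classes match the minority count.
    minority = df[df[target] == 1]
    majority = df[df[target] == 0]
    majority_down = resample(majority, replace=False,
                             n_samples=len(minority), random_state=0)
    return pd.concat([majority_down, minority])

def oversample(df, target="target"):
    # Duplicate minority rows until the classes match the majority count.
    minority = df[df[target] == 1]
    majority = df[df[target] == 0]
    minority_up = resample(minority, replace=True,
                           n_samples=len(majority), random_state=0)
    return pd.concat([majority, minority_up])
```

Undersampling is cheaper (the training set shrinks), which matches the local-machine runtime trade-off described above; oversampling keeps all the majority-class information at the cost of a bigger training set.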

1 Like

It’s an old one and didn’t gather many votes, but here is what I tried. It could be useful for someone who needs a head start on the Statoil competition.

https://www.kaggle.com/grroverpr/eda-and-cnn-resnet-18-lb-0-2094

2 Likes

Here is a kernel I wrote for the text normalization competition. It was greatly inspired by two other kernels, one of them being Neerja’s.
Any feedback would be great!
https://www.kaggle.com/savannahvi/3-simple-steps-lb-9878-with-new-data

3 Likes

Sharing a basic kernel on data cleaning and merging multiple tables for the new Kaggle competition:
https://www.kaggle.com/shikhar1/combining-tables-and-data-pre-processing?scriptVersionId=1831581
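
The basic pattern the kernel walks through looks something like this (file names and keys below are placeholders, not the competition’s actual schema):

```python
import pandas as pd

main = pd.read_csv("train.csv")          # placeholder file names
extra = pd.read_csv("extra_table.csv")

# Clean before merging: drop duplicate keys so the join stays one-to-one.
extra = extra.drop_duplicates(subset="id")

merged = main.merge(extra, on="id", how="left")
print(merged.isna().sum())  # check how many rows had no match in the join
```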

1 Like

Hi, this is my kernel on Text Normalization. I am hoping to build these functions into packages, so any ideas or suggestions on that front would be really great. Thanks!

https://www.kaggle.com/alvira12/class-wise-processing-lb-0-992-new-dataset

1 Like

Here’s an EDA kernel I made for the Kaggle survey data. It wasn’t for a competition, but it was good practice in pulling a story out of the data and communicating insights. https://www.kaggle.com/smcnish71/what-should-job-seekers-do-to-get-a-job

5 Likes

This is terrific - I really like the way you’ve told a story here, and also you’ve presented it nicely. Good use of the pipe operator in R too.

Your observation that job seekers rate Matlab as more important than those actually in jobs do was cute… :wink:

Here is my kernel: an initial EDA for the Mercari competition.

https://www.kaggle.com/vrtjso/mercari-eda-more-info-than-you-can-imagine

2 Likes

Very cool!

Here is my kernel for Quora Question Pairs. I am currently using Logistic Regression, and will try Random Forest later.
https://www.kaggle.com/chengchengx/logistic-regression-with-term-document-matrix
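
The core of that approach, for anyone curious, is roughly the following (a toy sketch with made-up question pairs; the kernel’s actual feature pipeline may differ):

```python
import scipy.sparse as sp
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

q1 = ["how do i learn python", "what is kaggle"]
q2 = ["what is the best way to learn python", "how old is the earth"]
y = [1, 0]  # 1 = duplicate pair, 0 = not

# Fit one vocabulary over both questions, then stack the two
# term-document matrices side by side: one row per question pair.
vec = CountVectorizer().fit(q1 + q2)
X = sp.hstack([vec.transform(q1), vec.transform(q2)])

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```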

1 Like

WTF is this face doing in my twitter feed?!? :wink:

5 Likes

If I’d known this would happen, I wouldn’t have put up my passport photo :smiley:

Thanks for sharing! Without knowing about this, I’d luckily already finished and added Adagrad, RMSProp and Adam.
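
For reference, the three update rules mentioned, sketched in numpy for a single parameter vector (hyperparameters are the commonly quoted defaults, not anyone’s exact implementation):

```python
import numpy as np

def adagrad_step(w, g, cache, lr=0.01, eps=1e-8):
    cache += g**2                      # accumulate squared gradients
    return w - lr * g / (np.sqrt(cache) + eps), cache

def rmsprop_step(w, g, cache, lr=0.001, decay=0.9, eps=1e-8):
    cache = decay * cache + (1 - decay) * g**2   # leaky average instead
    return w - lr * g / (np.sqrt(cache) + eps), cache

def adam_step(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g          # first moment (momentum)
    v = b2 * v + (1 - b2) * g**2       # second moment (RMSProp-style)
    m_hat = m / (1 - b1**t)            # bias correction for early steps
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```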

For most Kaggle competitions, I have been using some variant of the gradient boosting algorithm without fully understanding it (just knowing how to tune the models, run them, and take predictions works too :D).

So to gain a better understanding, I tried to write the code from scratch (on top of the DecisionTree from rf) and also wrote a blog post (still a draft, not public yet) explaining what I understood. It could be useful to someone looking to understand the same thing. If you already know it, please correct me if I am wrong anywhere.

Kaggle kernel with code - https://www.kaggle.com/grroverpr/gradient-boosting-simplified/
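
In case it helps readers of the kernel, the core loop can be compressed to something like this: fit each new tree to the residuals of the current ensemble. This is a bare-bones squared-error sketch of the general algorithm (using sklearn’s DecisionTreeRegressor), not the kernel’s exact code.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class SimpleGBM:
    def __init__(self, n_trees=100, lr=0.1, max_depth=3):
        self.n_trees, self.lr, self.max_depth = n_trees, lr, max_depth

    def fit(self, X, y):
        self.base = y.mean()            # start from the mean prediction
        self.trees = []
        pred = np.full(len(y), self.base)
        for _ in range(self.n_trees):
            residual = y - pred         # negative gradient of squared error
            tree = DecisionTreeRegressor(max_depth=self.max_depth)
            tree.fit(X, residual)
            pred += self.lr * tree.predict(X)
            self.trees.append(tree)
        return self

    def predict(self, X):
        return self.base + self.lr * sum(t.predict(X) for t in self.trees)
```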

1 Like

Very cool! I like your approach, and it’s great that you have an implementation behind it. I do think there’s a lot you can do to improve the writing and explanation: have a strong writer go through it carefully with you to help with the prose, and think about how to use your code to support your explanation.