Kaggle: IEEE fraud detection open kernel with Fast.AI

I’ve created an open Kaggle kernel for the IEEE fraud detection competition, which is currently ongoing.

https://www.kaggle.com/burgalon/ieee-fraud-detection-fast-ai-deep-learning-nn

Maybe someone will find this code useful as a starting point.

Most XGB models available as open kernels, or reported in the CV/LB thread, show an LB score that is higher than or similar to their CV/validation score, while in my kernel the LB score (~0.88) is significantly lower than validation (~0.92). Any ideas what might be causing this? Similarly, the model can’t reach the 0.97 validation AUC seen in other kernels, even when using the same features.
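
For anyone trying to reproduce the gap, here is a minimal way to double-check the reported validation AUC with sklearn, assuming a trained fastai v1 learner named `learn` (the name is a placeholder, not from the kernel):

```python
from fastai.basic_data import DatasetType
from sklearn.metrics import roc_auc_score

# Probabilities and labels on the validation set of the learner's DataBunch
preds, targets = learn.get_preds(ds_type=DatasetType.Valid)

# AUC computed the same way the LB does: on the positive-class probability
print('validation AUC:', roc_auc_score(targets.numpy(), preds[:, 1].numpy()))
```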

Just a quick guess from 3 mins of looking at code and past experience.

Many of the other kernels’ models use K-fold cross-validation, which helps with the score.
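
For reference, a minimal sketch of what that looks like with fastai v1 tabular: train one model per fold and average the test predictions. The DataFrames, column lists, layer sizes, and epoch counts below are placeholder assumptions, not the kernel’s actual settings; `isFraud` is the competition’s target column.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from fastai.tabular import *  # fastai v1 API

# Assumed to exist: train_df, test_df, cat_names, cont_names
dep_var = 'isFraud'
procs = [FillMissing, Categorify, Normalize]
fold_preds = []

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, valid_idx in skf.split(train_df, train_df[dep_var]):
    data = (TabularList.from_df(train_df, cat_names=cat_names,
                                cont_names=cont_names, procs=procs)
            .split_by_idx(list(valid_idx))
            .label_from_df(cols=dep_var)
            .add_test(TabularList.from_df(test_df, cat_names=cat_names,
                                          cont_names=cont_names))
            .databunch(bs=1024))
    learn = tabular_learner(data, layers=[200, 100], metrics=[AUROC()])
    learn.fit_one_cycle(3, 1e-2)
    preds, _ = learn.get_preds(ds_type=DatasetType.Test)
    fold_preds.append(preds[:, 1].numpy())  # P(fraud) for each test row

# Averaging across folds often closes part of the CV/LB gap
submission_preds = np.mean(fold_preds, axis=0)
```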

I faced the same thing and ended up with an LB score of 0.87, though I invested quite some time (you get this score almost right away with the fastai defaults). It looks like Tabular can’t keep up with XGB here (which is quite contrary to what Jeremy says, isn’t it?).

Observation: tweaking the hyperparameters didn’t change much, even when varying the learning rate by an order of magnitude, e.g. from 0.1 to 1. On the positive side, it’s astonishing how stable this is. On the negative side, it’s quite frustrating, because experimenting doesn’t get you very far…
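
As an illustration, the kind of sweep described above, assuming the `data` DataBunch from the kernel (fastai v1 names; values are only examples):

```python
from fastai.tabular import *  # fastai v1

# Retrain from scratch at each peak learning rate and compare validation AUC
for lr in [1e-2, 1e-1, 1.0]:
    learn = tabular_learner(data, layers=[200, 100], metrics=[AUROC()])
    learn.fit_one_cycle(3, lr)  # one-cycle schedule with max_lr = lr
```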

Feature engineering can make a difference. Are the hyperparameters the same between the XGB/tree-based models and the DL model?

I did all the published ‘best practice’ feature engineering from the competition, so the data should be at least comparable to what the trees worked on.

The tabular neural nets and XGB have totally different hyperparameters, don’t they?
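
They do. To make the contrast concrete, a rough side-by-side of the knobs each family exposes (the values here are arbitrary examples, not tuned settings; `data` is assumed as above):

```python
import xgboost as xgb
from fastai.tabular import *  # fastai v1

# XGB: tree depth, number of trees, shrinkage, row/column subsampling...
xgb_model = xgb.XGBClassifier(max_depth=9, n_estimators=500, learning_rate=0.05,
                              subsample=0.9, colsample_bytree=0.9)

# fastai tabular: layer sizes, per-layer dropout, embedding dropout, weight
# decay, plus the one-cycle learning-rate schedule at fit time
learn = tabular_learner(data, layers=[200, 100], ps=[0.2, 0.1],
                        emb_drop=0.05, wd=1e-2)
```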

I think looking at the Santander winner’s NN model may help: please see this link for the code, and for some explanation check this. It may give some ideas for future improvements.