Using my own helper script in Kaggle kernels

Hello People!

So I'm trying to run some fastai techniques from the current course on Kaggle, using the old "structured.py" file from Jeremy's old ML lessons.

I can't figure out how to load structured.py into the kernel. I've read some docs, but they seem to be outdated and no longer work.

Any help would be greatly appreciated!

You need to run and commit it as a utility script. Once that's done, you can add it to the kernel you want to use it in as an input.
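In case it helps the next person, importing it once it's attached as an input looks roughly like this. A minimal sketch, assuming the script was committed under the name "structured"; the fallback path is an assumption, so check your kernel's Input panel for where the script actually lands:

```python
import sys

# Kaggle normally puts attached utility scripts on the import path already,
# so the plain import usually works.
try:
    import structured
except ImportError:
    # Assumed location: check the Input panel of your kernel for the real path.
    sys.path.append('/kaggle/usr/lib/structured')
    import structured

# Then use the helpers from Jeremy's old ML lessons as usual, e.g.
# structured.train_cats(...) or structured.proc_df(...)
```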


Ah! Got it. Thanks a lot!


Just a note… I am not clear on whether you can use utility scripts in a "kernels-only" competition. Searching Kaggle turned up only a years-old, ambiguous answer, support did not respond to my question, and I wasted several submissions trying to deduce the answer. (Kernel execution continues even after a fatal error.) Very frustrating!

Please post if you figure out (or anyone knows) a definitive answer.

Unless I'm missing something, I don't see why it would matter. It's basically just a convoluted way of importing code, so it's nothing you couldn't get around by cutting and pasting it all into your kernel for a comp, right?

I don't know though, just guessing.

It may matter to the sponsors because they would own your submission notebook but not the utility scripts it uses. That’s the only reason I could think of for such a restriction. But whether the restriction does or does not exist, it’s not properly documented anywhere I could find.

Yes, I ended up pasting all the (once-organized) code into a single kernel when I could not determine whether it was failing to load the scripts. That change appeared to eliminate the "Submission Scoring Error". But the jam I was in right up to the end was a notebook that runs perfectly when executed manually on the test data, but fails when submitted, giving no feedback about the reason.

Nasty

There are many reasons why a submission kernel may fail on the private test data:

  • You are using too much memory. It worked fine on the public test set, but the private set is much larger and pushes the kernel over the memory limit.
  • The kernel takes too long. Check how much larger the private test data is; the kernel will take roughly that much longer to run.
  • The produced submission file is in the wrong format. Something in the switch from the public to the private data may be corrupting your submission file.
  • Inference is failing on the private data, for example due to a shape error. Some mismatch between the public and private test data is confusing your kernel (a defensive sketch follows this list).
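
A pattern that guards against several of these at once: run inference one example at a time inside a try/except, fall back to a safe constant on error, and check the submission columns before writing. A minimal sketch only; the paths, the id/target column names, and predict_one() are hypothetical placeholders, not the competition's real format:

```python
import pandas as pd

def predict_one(row):
    # Placeholder: swap in real model inference; a constant keeps the sketch runnable.
    return 0.5

test = pd.read_csv('../input/test.csv')           # hypothetical path

preds = []
for _, row in test.iterrows():                    # one row at a time keeps memory flat
    try:
        preds.append(predict_one(row))
    except Exception:
        preds.append(0.0)                         # safe fallback so the file still scores

sub = pd.DataFrame({'id': test['id'], 'target': preds})
assert list(sub.columns) == ['id', 'target']      # match sample_submission.csv exactly
sub.to_csv('submission.csv', index=False)
```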

There is a valid reason why Kaggle doesn't provide any additional debugging information: the more feedback it returns, the more easily Kagglers can probe the private test set.

Hi @ilovescience. I fully agree with all that you wrote. My best theories for the submission failure are either memory/GPU overflow or an audio file that throws an error on loading. Kaggle does tell you when a submission takes too long.

By the end I was probing the submission not to see the test set but to identify the error: put try/except around each likely place, set a flag on error, and then generate a submission file that gives a known score to communicate the error location (rough sketch below). It's like a game of twenty questions where each question costs a submission. I spent the majority of my time struggling with Kaggle rather than doing machine learning. In the end, I never found out how my method might perform on the test set.
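
For anyone curious, the trick looks roughly like this. Everything in the sketch is a hypothetical placeholder (the file names, the 'target' column, the load_test_audio()/run_inference() helpers); the only requirement is that each constant maps to a leaderboard score you can recognise:

```python
import pandas as pd

# Start from the sample submission so the file is always in a valid format.
sub = pd.read_csv('../input/sample_submission.csv')   # hypothetical path
failed_stage = None

try:
    test = load_test_audio()      # stage 1: hypothetical loading helper
    preds = run_inference(test)   # stage 2: hypothetical inference helper
except MemoryError:
    failed_stage = 'memory'
except Exception:
    failed_stage = 'other'

if failed_stage is None:
    sub['target'] = preds         # the real submission
else:
    # Constant predictions produce a known score, so the leaderboard score
    # itself tells you which stage crashed.
    sub['target'] = 1.0 if failed_stage == 'memory' else 0.0

sub.to_csv('submission.csv', index=False)
```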

Judging from the number of default .577's on the leaderboard, I'd speculate that many others were in the same situation and did not realize that their submission had crashed instead of running their method. It's a shame for the participants (me included), because false scores are discouraging, and for Cornell, because one of those failed submissions might have contained the solution they were hoping for.

Yes, if Kaggle returned the error, it could be deliberately generated to communicate information about the test set. I do not have a good answer for this issue. But here are a few ideas that I will eventually send to Kaggle:

  • Kaggle could require the hosts to provide a validation set that at least attempts to reflect the size and complexity of the test set. In this case the example provided was small and incomplete.

  • The above validation set would include a scored example submission file. Several questions, such as what to do when a bird call overlaps two time segments, went unanswered by the sponsors. Competitors could use the working example to understand exactly what is expected for submission and scoring.

  • Kaggle could aggregate the errors thrown by submissions. If many submissions fail because of a specific execution error, that could be hinted back to the competitors or used to correct the test set. I am thinking, for example, of file-read or conversion errors.

Thanks for giving me the chance to unwind from a month of futile hard work. And @vishak, sorry for hijacking your thread! Comments, criticisms, and ideas are welcome. :sweat_smile:

It's not a Kaggle competition if 50% of the LB doesn't realise their model is 100% useless :slight_smile:
