I built a minimal classifier with the tabular module to tackle this competition. It trains and validates with reasonable accuracy, and there is a mixture of the classes [0, 1] in the validation predictions, as expected. But when I inspected the predictions on the TEST data, there are no 1s, only zeros…
I looked into the source of get_preds but had a hard time grasping it and could not find a way to proceed with debugging. I would highly appreciate any pointers on my issue, or suggestions on what to read up on. Below are my code and details.
get_preds returns a tuple of (predictions, targets), and you are looking at the targets for the validation and test sets, not the predictions. The predictions are at index zero. Targets for a test set are always zero, since the test set has no real labels.
Yes, so if I look at index 0 instead I get the per-class probabilities, something like [[0.9, 0.1], [0.95, 0.05], …], each of which translates to class 0 because the first column (class 0) holds the higher probability (?).
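Exactly. To go from those probability rows to class labels you take the argmax of each row (in PyTorch/fastai this is usually `preds.argmax(dim=1)`). A minimal plain-Python sketch with made-up probability rows (not the poster's actual output) to show the mapping:

```python
# Hypothetical probability rows as returned at index 0 of get_preds;
# each row is [p(class 0), p(class 1)].
preds = [[0.9, 0.1], [0.95, 0.05], [0.4, 0.6]]

# For each row, pick the index of the largest probability (argmax).
labels = [max(range(len(row)), key=row.__getitem__) for row in preds]
print(labels)  # [0, 0, 1]
```

If every row in your test predictions has its larger value in the first column, the predicted labels really are all zeros, and the question becomes why the model is so confident in class 0 on the test set (e.g. class imbalance or a preprocessing mismatch between train and test).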