IndexError: Target is out of bounds on Tabular DLS

Hey everyone!

First time poster, hoping you can help me.

I finished Lesson 05 and wanted to try running the Pokémon dataset that I have for a project.

I did a very basic load of my CSV in Google Colab, ran the df through the tabular data loader with a CategoryBlock, and I am able to define
learn = tabular_learner(dls,metrics=accuracy, layers=[10,10])

However, when I try to find a learning rate or even fit, I get IndexError: Target (whatever the index is at the time of the error) is out of bounds.

I have checked my dataset and I can’t for the life of me see anything wrong.

Googling hasn’t been very fruitful,
What am I doing wrong?

Here’s the traceback:

IndexError Traceback (most recent call last)

in <cell line: 1>()
----> 1 learn.lr_find(suggest_funcs=(slide, valley))

20 frames

/usr/local/lib/python3.10/dist-packages/torch/nn/ in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
   3057     if size_average is not None or reduce is not None:
   3058         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3059     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
   3060
   3061

IndexError: Target 515 is out of bounds.

If it also helps, here’s the notebook:

Let me know if I miss anything, greatly appreciate the help

Can you reshare the notebook link? It goes to a Kaggle page and not Colab.

Sure thing:

Hopefully this one works now

Almost! It just needs to be unrestricted:

Very sorry about that.

I believe I have removed the restrictions now.

Thank you

Thanks! Can you also share the CSV data that you are using? You could post it to Google Drive and then share the link or upload it as a public gist.

I found this GitHub issue with a potential solution that might be worth looking into. What could be happening is that your validation and training splits do not contain the same categories. I’d need to see the data in more detail to troubleshoot.
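To make the failure mode concrete, here’s a tiny fastai-free sketch (the class count of 500 is made up; the 515 comes from your traceback). Cross-entropy indexes the model’s output logits by the target’s class number, so any target at or above the number of outputs raises exactly this IndexError:

```python
# Minimal illustration of the failure mode, without fastai or torch.
# Cross-entropy picks out logits[target] for each example, so a target
# index >= the number of model outputs is out of bounds.
n_classes = 500                # size of the output layer (made-up number)
logits = [0.0] * n_classes     # one logit per class

def pick_logit(target):
    # The indexing step that torch's cross_entropy_loss performs
    return logits[target]

pick_logit(480)                # fine: 480 < 500
try:
    pick_logit(515)            # the target from the traceback
except IndexError:
    print("IndexError: Target 515 is out of bounds.")
```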

Here you go: Copy of pokemon_battles.csv - Google Drive

Appreciate your help

Thanks for the data file Mike. I’ll be able to take a look at this and try to troubleshoot it this weekend.

@MikeTR I think the issue is that your training and validation sets contain different values for the “Winner” column (which is why I think the error is being thrown during the loss calculation: the Learner is trying to index Winners in the validation set that aren’t there). Here’s some code I ran to come to this conclusion:

Note that the line of code:

set([480, 512, 515, 542, 567]).issubset(winners_not_in_valid)

shows that the target indexes listed in the errors when I tried to run learn.lr_find (and so on) are in the training set but not in the validation set.

To resolve this issue you would have to create a validation set that contained the same unique Winners as the training set. However, I think that’s not possible with your dataset because the Winner column has 601 values and all of them are unique:
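A quick stand-in simulation shows the scale of the problem (the 601 unique winners and an 80/20 random split are assumptions based on the thread; the real notebook splits the actual CSV):

```python
import random

# Stand-in for the notebook's data and split: 601 unique winners,
# split 80/20 at random, as described in the thread.
winners = list(range(601))
random.seed(42)
random.shuffle(winners)
valid_winners = set(winners[:120])
train_winners = set(winners[120:])

winners_not_in_valid = train_winners - valid_winners

# Because every winner is unique, every training winner is missing from
# the validation split -- each one a class the loss can't look up there.
print(len(winners_not_in_valid))  # 481
```

No matter how you shuffle, an all-unique target column guarantees the two splits share no categories at all.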


So I think you’ll have to adjust the Winner column.

I’m also a bit unclear about the dataset in general: the “Winner” column seems completely different from either value in the PokemonA and PokemonB columns. Shouldn’t the Winner column contain one of those two columns’ values? I’m not a Pokemon expert, so perhaps there’s something going on that I just don’t understand.

Thanks for taking the time to take a look at this.

I also noticed this on show_batch, but this should not be the case: in the dataset, the Winner column has a value from either PokemonA or PokemonB. However, once it’s passed through the dls, the Winner column gets jumbled and we even see NaNs in there, which is super weird.

I’ll continue looking into why this could be happening, but do you have any idea?

I think solving this one might fix the rest

Oh my god I solved it!

Foolish me wasn’t adding Winner as a categorical column; no wonder it was never matching!

I am so happy, thank you so much for your help throughout this.

No, never mind: if I do this it just multiplies the Winner column, so this isn’t a solution.

Yeah, I’m not able to figure out why it’s shuffling the columns like that. Here’s a reformatted CSV of a subset of your data that I created just as a sanity check, and dls.show_batch works as expected:

I’m also able to train it:

You might want to consider reformatting your data in this way if you are unable to get your current data format to work for training.

Yeah, I also tried creating a column that was 0 if PokemonA won and 1 if PokemonB won; however, I was getting a very inconsistent loss, while yours drops very consistently.

Do you mind if I adapt mine to be similar to yours?

Also, do you think that having a numerical column as the dependent variable (which essentially made it a binary classification, when I wanted a categorical one) might’ve affected my loss?

Thanks again

Yes, of course, feel free to use the format I shared. I don’t think our dependent variables are very different since, under the hood, fastai will convert my Result column values of “Wins” and “Loses” to 1s and 0s. I’m not sure why your format is getting an inconsistent loss. It would be interesting to run experiments on both formats and see how they differ and why.
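For example, here’s a small sketch showing that a text Result column and a hand-made 0/1 column carry the same information once encoded (this uses plain pandas; fastai’s Categorify does an equivalent string-to-code mapping internally):

```python
import pandas as pd

# A categorical Result column versus a manual 0/1 column.
result = pd.Series(["Wins", "Loses", "Loses", "Wins"], dtype="category")
manual = (result == "Wins").astype(int)   # 1 if "Wins", else 0

# pandas sorts categories alphabetically, so "Loses" -> 0, "Wins" -> 1,
# making the two encodings identical.
print(result.cat.codes.tolist())  # [1, 0, 0, 1]
print(manual.tolist())            # [1, 0, 0, 1]
```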