Bugs while implementing MNIST 0-9 from scratch

For each tensor in the train_set I have used integer train_y labels from [0,1,2,3,4,5,6,7,8,9]. I am confused about how the loss function should be written in this case. I have read about the categorical_cross_entropy loss function and I know that works, but I am unable to write my batch accuracy function after that.

In the binary classification case it made sense. For 0-9, I added a stable softmax function and a categorical_cross_entropy function, but now I don't know what my batch_accuracy function should look like. Help appreciated.
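Roughly, what I have looks like this (a minimal sketch with illustrative names, assuming the labels stay as plain integers rather than one-hot vectors):

```python
import torch

def softmax(x):
    # subtract the per-row max before exponentiating for numerical stability
    x = x - x.max(dim=1, keepdim=True).values
    exp = torch.exp(x)
    return exp / exp.sum(dim=1, keepdim=True)

def categorical_cross_entropy(preds, targets):
    # preds: (batch, 10) softmax probabilities; targets: (batch,) integer labels 0-9
    # pick out the probability assigned to the correct class for each row
    idx = torch.arange(len(targets))
    return -torch.log(preds[idx, targets]).mean()
```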

An alternative approach is provided here, where a one-hot encoded label is added, but what if I do not want to go down that road?

For batch_accuracy, you want to use torch.argmax, which will return the index of the largest prediction (assuming that is what your model is predicting).
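Something along these lines should work (a minimal sketch, assuming your model outputs one activation per digit for each image and the targets are the integer labels):

```python
import torch

def batch_accuracy(preds, targets):
    # preds: (batch, 10) activations or probabilities; targets: (batch,) integer labels
    # argmax over the class dimension gives the predicted digit for each row
    predicted = torch.argmax(preds, dim=1)
    return (predicted == targets).float().mean()
```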

You could also look at how I implemented batch_accuracy in this notebook:


Thanks @jimmiemunyi, argmax is definitely a helpful tool to keep up my sleeve. The first time I trained, my model's accuracy increased as expected. I do not know if this is a stupid mistake, but on restarting the kernel the accuracy suddenly drops and stays constant. I tried running your exact code on my Colab and got the same issue. I do not know whether I should spend more time debugging this or move ahead to the next chapter.


Hello

At what point exactly does the accuracy drop and stay constant? What were you trying to implement? I have just rerun my notebook and the model is training with the loss decreasing (although the accuracy plateaus at 96% after some time, which is not a bad model).

Personally, I'd suggest you move on for now and revisit the problem when you have a fresher mindset. You can continue with the lessons during the week and set aside the weekend to try to debug the problem. By continuing the lessons you will also gain more knowledge (cross entropy and argmax are covered in the next lesson) and therefore a better understanding of the problem.
