I have a very small dataset (40 training examples, 10 validation examples, 120 classes) on which I’m getting very high accuracy with a very simple model (just batchnorm, flatten, and dense layers).
My training accuracy is 94–95% and my validation accuracy is 76–78%. I know it’s overfitting, and I have tried all 5 of Jeremy’s techniques. The data is of a very specific type (not images), so I cannot collect more data or do data augmentation. I’m overfitting even without convolutional layers, which is why I’m currently sticking to the layer types I mentioned. I’m using two dropout layers with a rate of 0.6, and the architecture is very simple. I can paste the model if anyone would like to see it.
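For context on what a 0.6 dropout rate does, here is a minimal numpy sketch of inverted dropout (the variant used by most frameworks): at train time it zeroes each activation with probability 0.6 and rescales the survivors so the expected value is unchanged. The function name and shapes are just for illustration, not from my actual model.

```python
import numpy as np

def inverted_dropout(x, rate=0.6, rng=None, training=True):
    """Zero each element of x with probability `rate` and rescale
    the survivors by 1 / (1 - rate) so E[output] == E[input]."""
    if not training:
        return x  # dropout is a no-op at inference time
    rng = rng or np.random.default_rng()
    keep_mask = rng.random(x.shape) >= rate  # True for kept units
    return x * keep_mask / (1.0 - rate)

# Illustration: ~60% of activations are dropped on each forward pass.
rng = np.random.default_rng(0)
acts = np.ones(100_000)
out = inverted_dropout(acts, rate=0.6, rng=rng)
print("fraction zeroed:", (out == 0).mean())   # close to 0.6
print("mean preserved:", out.mean())           # close to 1.0
```

With only 40 training examples, even aggressive dropout like this leaves each dense layer plenty of capacity to memorize the training set, which is consistent with the gap you’re seeing.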
My question is: is there ever a situation where validation accuracy simply cannot reach training accuracy? Is there a limit imposed by the size of the dataset? Or is it ALWAYS possible for validation accuracy to match training accuracy, and the network just needs the right parameters?
Thank you so much