While some practices built around transfer learning (the current SoTA approach) have already been discussed for getting better results, there is often still room for improvement when maximizing performance, especially on Kaggle. I have tried many ways to improve my results, but they still aren't enough.
Things that usually work:
- Data augmentation (chosen to suit the dataset; a minimal sketch follows this list)
- Custom loss functions (helpful in some cases, though mostly it's the standard ones; see the focal-loss example below)
- Adding more parameters (more layers, more computation); not helpful in all cases, but it usually works
- Adding more data (and larger mini-batches, if RAM allows)
- Researching the architecture more deeply
- Finding bottlenecks (often hard and time-consuming)
- Ensembling (proven to work really well, especially in competitions, though I'd like more information on it; see the averaging sketch below)
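To make the augmentation point concrete, here is a minimal sketch, assuming torchvision and an image-classification task; the specific transforms and parameter values are illustrative, not a recommendation:

```python
# Minimal augmentation pipeline sketch (assumes torchvision is installed).
# The transform choices below are illustrative; pick ones that preserve
# your labels (e.g., horizontal flips are wrong for digit recognition).
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),                # random scale + crop
    transforms.RandomHorizontalFlip(),                # only if flips are label-safe
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),  # standard for transfer learning
])
```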
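For the custom-loss point, one widely used example is focal loss (Lin et al., 2017), which down-weights easy examples so training focuses on hard ones. This is a hedged PyTorch sketch, and gamma=2.0 is the paper's default rather than a value tuned for any particular dataset:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss for multi-class classification (Lin et al., 2017).

    logits: (batch, num_classes) raw scores; targets: (batch,) class indices.
    gamma=2.0 is the commonly cited default, not a tuned value.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # Pick out the (log-)probability of the true class for each sample.
    true_log_probs = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    true_probs = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # (1 - p)^gamma shrinks the loss on examples the model already gets right.
    return (-(1.0 - true_probs) ** gamma * true_log_probs).mean()
```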
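And for ensembling, the simplest version is soft voting: average the predicted probabilities of several trained models. A sketch, assuming all models accept the same input and produce logits of the same shape:

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """Soft-voting ensemble: average softmax probabilities across models."""
    for m in models:
        m.eval()
    # Stack per-model probabilities into (num_models, batch, classes),
    # then average over the model dimension.
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)  # (batch, num_classes)
```

Weighted averages and stacking are common refinements on Kaggle, but plain averaging of diverse models is already a solid baseline.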
I wanted to gather more information on which directions (apart from these) we should be researching to get even better results, up to the point of saturating what current research allows. Please share your experiences, whether from Kaggle or from implementing research papers, that have worked out for you.