Hyperparameter Tuning - Choosing hidden layers and neurons

Is there a methodology for choosing the number of hidden layers and neurons that yields the best accuracy? If I add more hidden layers, will that produce better results? The question may seem too broad, but I just want to hear about your experience with this. Thank you!

I don’t believe there is much theory developed at the moment on how to choose the number of neurons and hidden layers. In general, adding layers tends to improve performance more than adding neurons to existing layers, but if you make the network too large it will overfit.
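In practice, this usually comes down to comparing a few depth/width configurations on a held-out validation set. A minimal sketch using scikit-learn's `MLPClassifier` (the dataset, layer sizes, and configurations below are made up for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic dataset purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Hypothetical candidates: deeper (more layers) vs wider (more neurons).
configs = [(32,), (32, 32), (32, 32, 32), (128,)]
scores = {}
for hidden in configs:
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                        random_state=0)
    clf.fit(X_train, y_train)
    # Accuracy on the held-out validation split.
    scores[hidden] = clf.score(X_val, y_val)

best = max(scores, key=scores.get)
print("best config:", best, "val accuracy:", scores[best])
```

The same loop works with any validation-score-producing model; for larger search spaces you would typically switch to `GridSearchCV` or `RandomizedSearchCV` rather than a hand-rolled loop.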

Alternatively, you can use the “stretch pants” approach, also mentioned in the book Hands-On Machine Learning with Scikit-Learn and TensorFlow: use a network with more layers and neurons than you actually need, then keep an eye on the validation curve and use early stopping to prevent overfitting.
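The “stretch pants” idea can be sketched with scikit-learn's built-in early stopping, which holds out part of the training data and halts when the validation score stops improving (the oversized layer sizes and dataset below are assumptions for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic dataset purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

big_net = MLPClassifier(
    hidden_layer_sizes=(256, 256, 256),  # deliberately oversized
    early_stopping=True,       # hold out a validation split internally
    validation_fraction=0.1,   # 10% of training data for the check
    n_iter_no_change=10,       # stop after 10 epochs with no improvement
    max_iter=1000,
    random_state=0,
)
big_net.fit(X, y)
# Training usually halts well before max_iter.
print("stopped after", big_net.n_iter_, "iterations")
```

In Keras/TensorFlow the equivalent is the `EarlyStopping` callback with `restore_best_weights=True`; the principle is the same either way: let the validation curve, not the architecture, decide when to stop.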