The story after the test set does not perform well

You keep the test data in a bunker so that, at the very end, you can check whether your model actually learned something by running it on that data. Fine so far: you can submit this model to Kaggle (if it is for a Kaggle competition).
But what if you send this model to a client and they tell you it is not performing well? That feedback means you have indirectly seen the hidden test data at the customer site. If you go back and forth with the customer many times, you end up overfitting to that hidden data.
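Below is a minimal sketch of the split this workflow implies, assuming scikit-learn and placeholder data (the variable names and split sizes are illustrative, not from THE BOOK): all the iteration happens against a validation set, and the bunkered test set is scored exactly once at the end.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder data standing in for a real dataset.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# First carve off the "bunker" test set; it is evaluated exactly once.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
# Split the remainder into train and validation; all tuning and
# back-and-forth iteration uses the validation set only.
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
# Iterate on this number as often as you like:
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
# Report this number once, at the very end:
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```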

Keeping the test data secure works for a Kaggle competition, because that is the only time you send your model for evaluation, but how can it work for a customer?

As THE BOOK says: "It cannot be used to improve the model; it can be used only to evaluate the model at the very end of our efforts."
My question is: what should we do next if we still have not reached the desired accuracy?