Merging validation and training sets + semi-supervised learning

I have a practical question about working on competitions and fine-tuning final models before deploying them to production. At what point in the development cycle do you merge the training and validation sets back together to train a final model? And do you ever do semi-supervised learning (using the provided test data in competitions, or a small sample of real-world test data for production models)? Curious to hear your feedback. In some of the competitions I've tried, merging the validation and training data doesn't give much of a boost at all (typically with 15%-20% of the data held back for validation). Thanks!
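
In case it helps clarify what I mean, here's a rough sketch of the workflow I'm describing, using scikit-learn with made-up data, an arbitrary estimator, and a 0.9 confidence threshold for pseudo-labels (all of those are just placeholders, not a recommendation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical data: X, y stand in for the labeled competition data,
# X_test for the unlabeled test data the competition provides.
rng = np.random.RandomState(0)
X, y = rng.randn(1000, 10), rng.randint(0, 2, 1000)
X_test = rng.randn(300, 10)

# 1. Normal development loop: hold back ~20% for validation.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("valid accuracy:", model.score(X_valid, y_valid))

# 2. Final model: refit the chosen configuration on train + valid combined.
X_full = np.concatenate([X_train, X_valid])
y_full = np.concatenate([y_train, y_valid])
final_model = RandomForestClassifier(n_estimators=200, random_state=0)
final_model.fit(X_full, y_full)

# 3. Optional crude semi-supervised step: pseudo-label the confident test
#    predictions and refit once more (0.9 is an arbitrary threshold).
proba = final_model.predict_proba(X_test)
confident = proba.max(axis=1) > 0.9
pseudo_labels = proba.argmax(axis=1)[confident]
X_aug = np.concatenate([X_full, X_test[confident]])
y_aug = np.concatenate([y_full, pseudo_labels])
final_model.fit(X_aug, y_aug)
```

My question is really about steps 2 and 3: when (if ever) you do them, and whether they're worth it in practice.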