Degree of confidence of random forest prediction

A random forest uses decision trees at its core, and the idea behind bagging multiple overfitted trees is that, as long as the errors each tree makes from overfitting are uncorrelated, the random forest should be more accurate than a single perfectly fitted tree.

But when talking about degree of confidence, @jeremy mentions that a higher standard deviation between the predictions of the individual estimators indicates a lower degree of confidence, and vice versa.

My interpretation is that a higher standard deviation between the predictions of the individual estimators indicates that their errors are uncorrelated, so the model is more robust and hence better.

So shouldn't the standard deviation of the predictions be a measure of the error correlation of the individual estimators, rather than of the degree of confidence of the random forest?
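
To make the question concrete, here is a rough toy sketch (scikit-learn, with made-up data and variable names, not the lecture's code) of the two quantities I am contrasting: the spread of the trees' predictions versus the correlation of their errors.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Toy regression data, purely for illustration
X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Predictions of every individual tree on the validation set: (n_trees, n_rows)
tree_preds = np.stack([t.predict(X_valid) for t in rf.estimators_])

# Spread of the trees for each row: what the lecture calls degree of confidence
per_row_std = tree_preds.std(axis=0)

# Correlation of the trees' errors: what I am calling error correlation
tree_errors = tree_preds - y_valid                       # (n_trees, n_rows)
corr = np.corrcoef(tree_errors)                          # (n_trees, n_trees)
mean_offdiag_corr = (corr.sum() - np.trace(corr)) / (corr.size - len(corr))

print(per_row_std.mean(), mean_offdiag_corr)
```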

I think the important distinction here is between an individual example and the overall model. The confidence measured by the standard deviation between estimators applies to a specific example (data point). Say I have a model that predicts a price, and for product A the price is predicted with a standard deviation of 10, while for product B it is 100. My model is then more confident in the price prediction for product A than for product B.
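
Continuing the toy sketch in the question above (so `tree_preds` holds every tree's predictions for the validation rows), the per-row spread looks like this; the "product A" and "product B" rows are just the rows where the trees agree most and least:

```python
row_mean = tree_preds.mean(axis=0)   # the forest's prediction for each row
row_std = tree_preds.std(axis=0)     # how much the trees disagree on that row

a = row_std.argmin()                 # a "product A"-like row: trees agree
b = row_std.argmax()                 # a "product B"-like row: trees disagree
print(f"row {a}: {row_mean[a]:.1f} +/- {row_std[a]:.1f}")
print(f"row {b}: {row_mean[b]:.1f} +/- {row_std[b]:.1f}")
```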

Ok, so suppose we take the standard deviation for each observation and then average them. If this mean standard deviation is high, does that mean the model is underfitting?
E.g.: in a regression problem, if the random forest's predictions have a high standard deviation across trees, but their average is accurate over many data points or over a validation split, can we interpret that as each tree in the forest being overfitted with random, uncorrelated errors, rather than as the random forest having a low degree of confidence in its predictions? A rough way to check this empirically is sketched below.
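
Again continuing the toy sketch above (so `tree_preds`, `per_row_std`, and `y_valid` are the assumed names from that snippet), one could compare the average per-row spread with the error of the individual trees versus the error of the ensemble:

```python
# Error of the averaged (ensemble) prediction on the validation split
rf_rmse = np.sqrt(np.mean((tree_preds.mean(axis=0) - y_valid) ** 2))

# Error of each individual tree, then averaged across trees
tree_rmse = np.sqrt(np.mean((tree_preds - y_valid) ** 2, axis=1))

print("mean per-row std:       ", per_row_std.mean())
print("single-tree RMSE (avg): ", tree_rmse.mean())
print("ensemble RMSE:          ", rf_rmse)
```

If the individual trees are much worse than the ensemble while the per-row spread is large, that pattern is consistent with overfitted trees whose errors mostly cancel out when averaged, rather than with an underfitting forest.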