Yeah, each tree draws its own training set with replacement (a bootstrap sample), so tree A sees one ~63%-ish subset of the unique examples, tree B sees a different ~63%-ish subset, and any one tree can see the same training example multiple times. I'm new to this stuff and I still find bootstrapping a really interesting idea that I don't totally get (when does it actually make sense to use it?).
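To make the ~63% number concrete, here's a quick simulation (just stdlib Python, nothing forest-specific): draw n indices with replacement from a dataset of size n and count how many distinct examples show up. The expected fraction is 1 - (1 - 1/n)^n, which tends to 1 - 1/e ≈ 0.632.

```python
import random

random.seed(0)
n = 10_000

# One bootstrap sample: n draws with replacement from range(n)
sample = [random.randrange(n) for _ in range(n)]

# Fraction of distinct training examples that made it into the sample
unique_frac = len(set(sample)) / n
print(f"fraction of unique examples: {unique_frac:.3f}")  # ≈ 0.632
```

The ~37% of examples a given tree never sees are its "out-of-bag" set, which is what random forests use for their free validation estimate.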
But for random forests, I think the point is that each tree trains on a somewhat different training set from the others, so their individual mistakes are less correlated, and averaging the trees cancels a lot of that noise. The forest ends up with lower variance than any single tree (bagging doesn't really reduce bias, since each tree is still roughly unbiased on its own). Feels like I'm handwaving, though.