Is it correct to analogize gradient descent and SGD to decision trees and random forests?

As in: gradient descent computes the gradient of the loss over the whole dataset before each weight update, whereas SGD estimates that gradient from a single sample (or a mini-batch). Is this analogous to how a single decision tree chooses feature thresholds using the whole dataset, whereas a random forest builds each tree on a random bootstrap sample (and random feature subsets)? Is this comparison valid?
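To make the first half of the analogy concrete, here is a minimal sketch (my own toy example, not from any particular library) of the distinction I mean: full-batch gradient descent updates the weight using the gradient over the entire dataset, while SGD updates it using a random subset each step. All names (`grad`, `full_batch_gd`, `sgd`) are just illustrative.

```python
import random

# Toy data: y = 3*x, so the true weight is 3.0.
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [3 * x for x in xs]
data = list(zip(xs, ys))

def grad(w, pairs):
    # Gradient of mean squared error (1/n) * sum((w*x - y)^2) w.r.t. w.
    return sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)

def full_batch_gd(steps=200, lr=0.05):
    w = 0.0
    for _ in range(steps):
        # Every update uses the gradient over the WHOLE dataset.
        w -= lr * grad(w, data)
    return w

def sgd(steps=200, lr=0.05, batch_size=2, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        # Every update uses the gradient over a random mini-batch.
        batch = rng.sample(data, batch_size)
        w -= lr * grad(w, batch)
    return w
```

Both versions recover a weight near 3.0 here; the question is whether this "whole dataset vs. random subsets" contrast is doing the same kind of work as bootstrap sampling in a random forest.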