Similarities between ML tricks and DL tricks

Research suggests that finding similarities and connections between different domains helps deepen one's understanding. So I'd like to propose that we list the similarities and connections between tricks/concepts/methods in ML and DL.

I’ll go first:

I think the row subsampling in random forests is kind of like doing SGD, in the sense that it's impractical to look at the whole dataset in one go. So every time we try to make the model better, we consider only a "batch" of the dataset.
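Here's a minimal sketch of that parallel in scikit-learn. It uses the real `max_samples` parameter for per-tree row subsampling and `partial_fit` for mini-batch-style SGD updates; the subsample fraction (25%) and batch size (32) are just illustrative values I picked, not anything tuned:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Random forest: each tree is grown on a random subsample of the rows
# (here 25% per tree via max_samples), never the whole dataset at once.
rf = RandomForestClassifier(n_estimators=100, max_samples=0.25, random_state=0)
rf.fit(X, y)

# SGD: each update likewise sees only a small "batch" of the rows;
# partial_fit consumes one batch at a time, here 32 rows per step.
sgd = SGDClassifier(random_state=0)
classes = np.unique(y)
for start in range(0, len(X), 32):
    sgd.partial_fit(X[start:start + 32], y[start:start + 32], classes=classes)
```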

And I feel that using max_features=0.5 for a random forest is similar to using dropout=0.5 for a neural net, in the sense that they both randomly drop some dimensions of the information to avoid overfitting.
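A quick side-by-side sketch of the two, assuming scikit-learn and PyTorch; the layer sizes are made up for illustration, and 0.5 just mirrors the values in the post:

```python
from sklearn.ensemble import RandomForestClassifier
import torch.nn as nn

# Random forest: each split considers only a random half of the features,
# so no tree can lean too heavily on any single feature.
rf = RandomForestClassifier(max_features=0.5)

# Neural net: dropout zeroes a random half of the activations on each
# training forward pass, so no unit can be leaned on too heavily either.
net = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)
```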

So what's your take? Looking forward to your insights. :laughing::laughing::laughing:
