Another treat! Early access to Intro To Machine Learning videos

Hello asutosh97

I have a question about the OOB score:

If I understood what Jeremy said correctly (English is not my native language), the OOB score lets you see how well the model works without needing a separate validation set. For this reason it is also useful when we have little data.

My question is: why does Jeremy use the following in the notebooks?

from sklearn.ensemble import RandomForestRegressor
# oob_score=True makes the forest also report R^2 on each tree's out-of-bag rows
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)  # print_score is a helper defined in the course notebooks

If I use the OOB score, shouldn’t I fit the model on the complete data instead of only X_train and y_train?

This is looking great! Thanks for sharing :slight_smile:



In the 2nd lesson there is an introduction to the max_features parameter of random forests. To me it looks similar to dropout in a neural network from deep learning. Is that the correct intuition?

@Kasianenko
Yes, absolutely. Connecting the right dots.
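To make the analogy concrete, here is a minimal sketch (the value 0.5 is just for illustration, and X_train / y_train are assumed to be the same frames used in the earlier snippet): each split only gets to consider a random subset of the columns, a bit like dropout randomly ignoring units.

from sklearn.ensemble import RandomForestRegressor

# max_features=0.5: each split considers only a random half of the columns,
# loosely analogous to dropout randomly ignoring activations during training
m = RandomForestRegressor(n_estimators=40, max_features=0.5, n_jobs=-1)
m.fit(X_train, y_train)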


Hello @fumpen,

As Jeremy says in one of his lectures, we can’t use any of the test data for calibration. Think of it as if you don’t have it until you’ve trained your model completely. Otherwise, you can’t get true results.


Hello Everyone,

In Lecture 2, @jeremy explains how a decision tree is formed by selecting, at each step, the variable and split point which yield the lowest MSE (as per the naive model). Can someone please explain why exactly this is the splitting methodology? From another source, decision tree splitting is done using ‘Information Gain’. How are these two (MSE and Information Gain) connected?

Hello @vahuja4, think of it this way:

Information Gain = MSE at the parent node − average MSE of the children after splitting (weighted by child size)

So IG is largest when the average MSE drops the most. Both are basically indicating the same thing.


I see. But why is this termed ‘Information Gain’? Also, do you know why this is the chosen methodology for splitting?

@vahuja4

  1. I think it is termed that way by convention: the closer your predictions come to the actual values, the more information you seem to have gained. MSE basically denotes the gap between the actual values and the model’s predictions, so the more that gap closes (i.e. the more the MSE drops), the more information can be thought of as gained.

  2. As you know, in DecisionTreeRegressor the prediction at a node is given by taking the average of all the data points belonging to it. So our ultimate goal is to make this average as close to the actual values as possible.
    So we basically do a brute-force search over all possible splits and check which one gives averages closest to the actual values, using MSE as the metric to measure that closeness (see the sketch below).
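Here is a rough sketch of that brute-force search over a single feature (this is only an illustration of the idea, not the sklearn implementation): the best split is the one with the lowest size-weighted average MSE of the children, and the information gain is simply the drop from the parent’s MSE.

import numpy as np

def mse(v):
    # MSE of predicting the mean of v for every row in v
    return ((v - v.mean()) ** 2).mean() if len(v) else 0.0

def best_split(x, y):
    parent_mse = mse(y)
    best_val, best_child_mse = None, parent_mse
    for split in np.unique(x)[:-1]:          # candidate split points
        left, right = y[x <= split], y[x > split]
        # size-weighted average MSE of the two children
        child_mse = (len(left) * mse(left) + len(right) * mse(right)) / len(y)
        if child_mse < best_child_mse:
            best_val, best_child_mse = split, child_mse
    info_gain = parent_mse - best_child_mse  # larger when the MSE drops more
    return best_val, info_gain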

I hope these answer your questions.

@asutosh97, thank you! Makes sense.

Hey guys, if you want an even deeper understanding of your tree-based models / xgboost / sklearn etc., check these cool repos out.

What are your thoughts on this, Jeremy?

(It’s really nice to interpret the black boxes properly…)

Both look promising.


Looks interesting. Where can I find documentation on how to use this?

The notebooks are probably there…

The plots look amazing…

https://nbviewer.jupyter.org/github/slundberg/shap/tree/master/notebooks/
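If anyone wants to try it, here is a minimal sketch of the usual shap workflow for a tree model (assuming m is the fitted random forest and X_train the training frame from earlier; see the linked notebooks for the authoritative examples):

import shap

explainer = shap.TreeExplainer(m)              # works with tree ensembles like random forests
shap_values = explainer.shap_values(X_train)   # per-row, per-feature contributions
shap.summary_plot(shap_values, X_train)        # global view of feature impact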


I’m trying out the techniques learnt in lessons 1 and 2 on the house prices Kaggle competition.


The training set has 1460 rows. Should I still split it in two to get a separate validation set, or should I just rely on oob_score?

I submitted my predictions to Kaggle.
On my validation set I had a root mean squared error of 0.0486755, but on Kaggle my error was 0.14651, placing me around 2407 on the leaderboard.
Model is at


I would be glad if you could have a look at it and help me improve my score.
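In case it helps, here is a minimal sketch of carving out a validation set from the 1460 rows (X and y are assumed to be the processed features and the log-transformed SalePrice; the 20% size and the random split are just illustrative choices):

from sklearn.model_selection import train_test_split

# Hold back ~20% of the rows; keep oob_score as an extra sanity check rather than the only one
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=42)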

I just wish you would start an international fellowship program for this course too, and for part 2 of the ML course, so that international students can make the most of it. The first seven videos of ML1 are a gold mine for tree-based models.


You are over-fitting. It also looks like your validation set or oob_score isn’t representative of the Kaggle test set. The way Jeremy recommends fixing this is to try out 5 or so different validation sets on models of different ‘goodnesses’ and submit the results to Kaggle. Plot your score on each validation set against your score on the test set to compare the relationship. What you’re looking for is a roughly straight line, indicating that performance on your test set improves as your score improves on your validation set.

Some other ways to reduce over-fitting: increase your min_samples_leaf parameter to a higher number, and reduce the max_features parameter (the number of features considered at each split) to increase the diversity of the trees you’re creating.
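As a rough sketch of those two knobs (the exact values are only starting points to tune, and X_train / y_train are assumed to come from your own split):

from sklearn.ensemble import RandomForestRegressor

m = RandomForestRegressor(
    n_estimators=40,
    min_samples_leaf=5,   # bigger leaves -> each prediction averages more rows
    max_features=0.5,     # fewer candidate features per split -> more diverse trees
    oob_score=True,
    n_jobs=-1)
m.fit(X_train, y_train)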

Hope this helps!

Regarding the discussion here https://youtu.be/3jl2h9hSRvc?t=635 on why the OOB score in a random forest would be lower than the validation score: I understand Jeremy’s point, but I wonder if this is only true under the assumption that there is little or no overfitting on the training data? Overfitting on the training data seems possible to me, since each out-of-bag row has still been seen by at least some trees, while the validation set is entirely unseen by any tree.
Thanks for any help.

(Edit: I now see here https://youtu.be/3jl2h9hSRvc?t=1208 that Jeremy does mention that the OOB score could be better than the validation score. But I’m not sure if that is the same scenario as the one I described above.)
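One way to check this on your own data is simply to compare the two numbers directly (a minimal sketch, assuming X_train / y_train / X_valid / y_valid come from a split like the one in the notebooks):

from sklearn.ensemble import RandomForestRegressor

m = RandomForestRegressor(n_estimators=40, oob_score=True, n_jobs=-1)
m.fit(X_train, y_train)                               # training split only

print('OOB R^2:       ', m.oob_score_)                # scored on out-of-bag rows
print('Validation R^2:', m.score(X_valid, y_valid))   # scored on the held-out set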

After doing the feature engineering on my training set, how do I apply the same changes to the test set, such as one-hot encoding the columns or parsing the dates from “SaleDate” into “Is_month_end”, “sale_month” and the other changes made to the training set? Should I merge the two sets at the start, perform the feature engineering, and then split them again for training?
Or is there another good way of doing it?
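Merging first and splitting afterwards does work; here is a minimal pandas sketch of that pattern (train / test are the raw frames read from the competition CSVs, and 'Neighborhood' is just an example column, both assumed names):

import pandas as pd

# Mark the rows so the two frames can be separated again later
train['is_train'], test['is_train'] = True, False
combined = pd.concat([train, test], sort=False)

# Apply the same feature engineering once to both sets
combined['SaleDate'] = pd.to_datetime(combined['SaleDate'])
combined['sale_month'] = combined['SaleDate'].dt.month
combined['Is_month_end'] = combined['SaleDate'].dt.is_month_end
combined = pd.get_dummies(combined, columns=['Neighborhood'])

# Split back apart, dropping the marker column
train_proc = combined[combined['is_train']].drop(columns='is_train')
test_proc = combined[~combined['is_train']].drop(columns='is_train')

The alternative is to fit any encoders on the training set only and then apply them to the test set; either way, the key is that the two frames end up with exactly the same columns.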

Is this the machine learning course referred to in Lecture 1, Part 1?