Why does our rmse give a lower score than model.score?

I started the ML1 course on fastai and began practicing, and I noticed something whenever we call print_score(m):


:point_up: This returns a list containing the RMSE on the training data, then on the validation data, then m.score on train, then on validation. The list looks like:

`[0.09049375960085775, 0.2518132221763501, 0.9828851897252822, 0.8867586509526179]`

As we can see, the first two values (computed with our own rmse function) are much lower than the next two. So here is the question: why are those values so much worse?

The print_score(m) function returns its results in the following order:

  • 1st value: RMSE for the training set
  • 2nd value: RMSE for the validation set
  • 3rd value: R² score for the training set (what m.score returns for a regressor, not an accuracy)
  • 4th value: R² score for the validation set
  • 5th value: OOB score, if enabled in the RF model

For RMSE, the lower the values the better the model; for R² it is vice versa. So the first two values being lower does not mean they are worse — the two metrics simply live on different scales. The difference between the training and validation scores indicates that the model is overfitting. You will be working to get a lower RMSE and a higher R² on the validation set, while at the same time minimizing the overfitting.
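The point above can be checked with a tiny self-contained sketch (the data here is made up for illustration; nothing is taken from the course notebooks). The very same predictions produce a small RMSE and a large R² at the same time, because RMSE is an error measured in the units of the target, while R² is the fraction of the target's variance that the model explains:

```python
import math

def rmse(pred, actual):
    # average prediction error, in the same units as the target
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual))

def r2_score(pred, actual):
    # fraction of the target's variance explained; 1.0 is a perfect fit
    mean_a = sum(actual) / len(actual)
    ss_res = sum((p - a) ** 2 for p, a in zip(pred, actual))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

actual = [1.0, 2.0, 3.0, 4.0, 5.0]   # made-up targets
pred   = [1.1, 1.9, 3.0, 4.2, 4.8]   # made-up, fairly good predictions

print(rmse(pred, actual))      # ~0.14 -> low, which is good for RMSE
print(r2_score(pred, actual))  # ~0.99 -> high, which is good for R²
```

So a low first pair and a high second pair, as in the list in the question, is exactly what a well-fit model should produce.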


Thank you, Mostafa Mohamed, I really appreciate your time.
