As we can see, the first two values (which are calculated with our defined rmse) are much lower than the next two. So here is the question: why do these values look so much worse?
The print_score(m) function returns the results as follows:
1st value: RMSE for the training set
2nd value: RMSE for the validation set
3rd value: Accuracy for the training set
4th value: Accuracy for the validation set
5th value: OOB score, if requested in the RF model
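For reference, here is a minimal sketch of how such a print_score helper is commonly written in the fast.ai ML course notebooks; it assumes X_train, y_train, X_valid, y_valid already exist in scope and that m is a fitted scikit-learn RandomForestRegressor (these names are assumptions, not taken from the post above):

```python
import math

def rmse(preds, targets):
    # Root mean squared error between predictions and true values
    return math.sqrt(((preds - targets) ** 2).mean())

def print_score(m):
    # Assumes X_train, y_train, X_valid, y_valid are defined globally
    res = [rmse(m.predict(X_train), y_train),   # 1st: training RMSE
           rmse(m.predict(X_valid), y_valid),   # 2nd: validation RMSE
           m.score(X_train, y_train),           # 3rd: training score (R^2)
           m.score(X_valid, y_valid)]           # 4th: validation score (R^2)
    if hasattr(m, 'oob_score_'):                # 5th: OOB score, only present
        res.append(m.oob_score_)                #      when oob_score=True was set
    print(res)
```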
For RMSE, the lower the values, the better the model; for accuracy it is vice versa. A large difference between the training and validation scores indicates that the model is overfitting. You will be working to get lower RMSE values and higher accuracy for the validation set while, at the same time, minimizing the overfitting.
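To make the interpretation concrete, here is a hedged illustration with made-up numbers (they are not real results from the post), showing how the train/validation gaps on both metrics can be read as an overfitting signal:

```python
# Hypothetical print_score output, purely for illustration:
# [train RMSE, valid RMSE, train accuracy/R^2, valid accuracy/R^2, OOB score]
res = [0.09, 0.25, 0.98, 0.88, 0.89]

train_rmse, valid_rmse, train_acc, valid_acc = res[:4]

# Lower RMSE is better; higher accuracy/R^2 is better.
# A large train-validation gap on either metric suggests overfitting.
print(f"RMSE gap  (valid - train): {valid_rmse - train_rmse:.2f}")
print(f"Score gap (train - valid): {train_acc - valid_acc:.2f}")
```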