Decreasing validation and training set loss by removing redundant features

Hi all,
In ML lessons 4 and 5:

Why does reducing the dimensionality of the data (dropping redundant features, so that feature importance isn't split among duplicate columns) make the model generalize better?
And why does the same improvement show up on the training set? I know that after removing features the model is less computationally expensive, but what about accuracy and loss?
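As a side note on the "importance being split" point: here is a minimal, hypothetical sketch (made-up data and names, not the course notebook) showing that when a random forest sees an exact duplicate of a feature, the importance that feature would have had gets divided between the two copies, so each copy looks less important than the underlying signal really is.

```python
# Hypothetical sketch: a duplicated feature splits its importance between
# the two copies in a random forest. Data and feature layout are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
signal = rng.normal(size=n)          # the informative feature
noise = rng.normal(size=n)           # an uninformative feature
y = 3 * signal + 0.1 * noise         # target driven almost entirely by `signal`

X_single = np.column_stack([signal, noise])                 # no duplicate
X_dup = np.column_stack([signal, signal.copy(), noise])     # redundant copy of `signal`

rf = RandomForestRegressor(n_estimators=200, random_state=0)
imp_single = rf.fit(X_single, y).feature_importances_
imp_dup = rf.fit(X_dup, y).feature_importances_

print("importance of signal, no duplicate:  ", imp_single[0])
print("importance of each copy, duplicated: ", imp_dup[0], imp_dup[1])
```

With the duplicate present, each copy ends up with roughly half the importance the single column had, which is why dropping redundant columns makes the remaining importances easier to read.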

At that very stage (removing redundant features), the most important thing in this data is time. Before dropping the SaleId column we had a predictor that tracked the forward movement of time (a relation between time and SalePrice).
After dropping it, why do the accuracies on the validation set and the training set, and the OOB score, all go up??
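For what it's worth, here is a small, hypothetical reproduction of that kind of experiment (synthetic data and made-up column names, not the course dataset): an ID column that is just a running index over sales, so it correlates with time order but carries no reusable signal, compared against fitting without it and reading the OOB score.

```python
# Hypothetical sketch: compare OOB score with and without an ID-like column
# that only encodes sale order. Column names and data are made up.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "YearMade": rng.integers(1990, 2020, n),
    "MachineHours": rng.uniform(0, 10_000, n),
})
# "SaleId" is a running index: it tracks sale order (time) on the training
# rows, but an unseen row's id tells the model nothing transferable.
df["SaleId"] = np.arange(n)
y = (20_000 + 300 * (df["YearMade"] - 1990)
     - 0.5 * df["MachineHours"] + rng.normal(0, 1_000, n))

def oob(cols):
    m = RandomForestRegressor(n_estimators=100, oob_score=True,
                              random_state=0, n_jobs=-1)
    m.fit(df[cols], y)
    return m.oob_score_

with_id = oob(["YearMade", "MachineHours", "SaleId"])
without_id = oob(["YearMade", "MachineHours"])
print(f"OOB with SaleId:    {with_id:.3f}")
print(f"OOB without SaleId: {without_id:.3f}")
```

The ID column lets trees memorize row order on the bagged sample, which wastes splits without adding generalizable signal, so removing it tends not to hurt (and can help) the OOB and validation scores.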