Question on data preparation for something like the Titanic dataset

When working with something like the Titanic dataset (see here), is it best to normalize every field in the dataset prior to training?

If so, do we need to store the mean and std for each field for future use when we want to do predictions?

It seems the answer to both is yes, and that we'll have to do something similar to VGG preprocessing, where the per-channel mean is subtracted as part of data preparation.
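To make the idea concrete, here's a minimal sketch of fitting normalization statistics on the training data and reusing them at prediction time. The column values below are made up, just standing in for Titanic-style numeric features like Age and Fare:

```python
import numpy as np

# Toy numeric columns standing in for Titanic-style features (Age, Fare).
train = np.array([[22.0,  7.25],
                  [38.0, 71.28],
                  [26.0,  7.92],
                  [35.0, 53.10]])

# Compute and store the training-set statistics for reuse later
# (e.g. pickle them alongside the model).
mean = train.mean(axis=0)
std = train.std(axis=0)

train_norm = (train - mean) / std

# At prediction time, normalize new rows with the *stored* training
# statistics, never with statistics computed from the new data itself.
new_passenger = np.array([[29.0, 10.50]])
new_norm = (new_passenger - mean) / std
```

In practice something like scikit-learn's `StandardScaler` does the same thing: `fit` on the training set, then `transform` any future data with the saved statistics.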

@wgpubs check out some of the more popular kernels on the Titanic dataset; a few of them actually discuss exactly that.