What type of normalization does TabularTransform apply to continuous data?

What type of normalization does TabularTransform apply to continuous data? I can’t find the answer in the docs.

https://docs.fast.ai/tabular.transform.html#Transforms-for-tabular-data

Here it shows that each continuous variable has its mean subtracted and is then divided by its standard deviation.
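
For reference, a minimal sketch of that standardization in plain pandas (column name is just a placeholder, and this is my reading of what the docs describe, not fastai's actual implementation):

```python
import pandas as pd

# toy continuous column; "age" is a made-up example name
df = pd.DataFrame({"age": [22.0, 35.0, 47.0, 58.0, 63.0]})

# standardize: subtract the mean, divide by the standard deviation
mean, std = df["age"].mean(), df["age"].std()
df["age_norm"] = (df["age"] - mean) / std
```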

But I was wondering whether this is unnecessary if we apply a batchnorm operation anyway, since batchnorm performs a similar scaling, just with learnable parameters (if I understand it correctly). Anyone care to share some insight into this?
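
If my understanding is right, batchnorm does roughly the same per-batch normalization, plus a learnable scale and shift. A quick PyTorch sketch of what I mean (not fastai's code):

```python
import torch
import torch.nn as nn

x = torch.randn(32, 3)  # a batch of 32 rows, 3 continuous features

# BatchNorm1d: normalize each feature over the batch, then apply a
# learnable scale (weight/gamma) and shift (bias/beta)
bn = nn.BatchNorm1d(3)
y = bn(x)

# equivalent computation in training mode, ignoring running statistics
mean = x.mean(dim=0)
var = x.var(dim=0, unbiased=False)
y_manual = bn.weight * (x - mean) / torch.sqrt(var + bn.eps) + bn.bias
```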
