I’d imagine this would be a common question, but I’m still struggling to find a solution.
When doing PCA, the lower-dimensional result sometimes flips sign on one or more dimensions while the absolute values stay the same.
I’m aware of SVD’s sign-ambiguity issue, but I’m using PCA from sklearn, which already includes a “sign flipping” trick: for each component it looks at the element of U with the largest absolute value and, if that element is negative, flips the sign of the whole component.
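For reference, here is a minimal numpy sketch of the sign convention as I understand it (the function name and details are my own, not sklearn's actual implementation):

```python
import numpy as np

def pca_deterministic(X, n_components):
    """PCA via SVD with a deterministic sign convention:
    each component is flipped so that the largest-magnitude entry of the
    corresponding column of U is positive (the idea behind sklearn's
    internal svd_flip helper, as I understand it)."""
    Xc = X - X.mean(axis=0)          # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # For each column of U, locate the entry with the largest absolute
    # value and flip the column (and the matching row of Vt) so that
    # this entry is positive.
    max_abs_idx = np.argmax(np.abs(U), axis=0)
    signs = np.sign(U[max_abs_idx, np.arange(U.shape[1])])
    U *= signs
    Vt *= signs[:, np.newaxis]
    return (U * S)[:, :n_components], Vt[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
scores, components = pca_deterministic(X, 2)
```

With this convention the signs are fixed for a given dataset, so repeated runs agree; my understanding is that sklearn applies essentially the same trick internally.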
Here is a short notebook I put together to demonstrate the issue:
What am I missing?