How to find matrix columns that are "nearly colinear"?

This may be a well-known linear algebra question, but I don’t know the right concepts to ask it well!

Suppose I am looking at a weight matrix. Is there a way to find whether and which columns are “nearly colinear”? And more generally, whether a group of columns is “nearly linear combinations” within itself?

My goal is to reduce a model’s size and computation, particularly if the layer is followed by a ReLU non-linearity.

Any pointers to the right math topic are appreciated, as well as whether this idea has already been investigated.

Colinearity can be found using the dot product of the column vectors, which gives the cosine of the angle between them. First normalize the column vectors, then take the dot product; if the result is close to 1 (or -1), the two columns are nearly colinear.
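
Not from the thread, but here’s a minimal NumPy sketch of that pairwise check; the matrix `W`, the planted colinear pair, and the 0.99 threshold are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 10))     # hypothetical weight matrix; columns are the vectors to compare
W[:, 3] = 2.0 * W[:, 1] + 0.01 * rng.standard_normal(64)  # plant a nearly colinear pair

# Normalize each column to unit length; pairwise dot products of the
# normalized columns are then the cosines of the angles between them.
Wn = W / np.linalg.norm(W, axis=0)
cos = Wn.T @ Wn

# Flag pairs whose |cosine| is close to 1 (colinear up to sign).
rows, cols = np.triu_indices(W.shape[1], k=1)
for a, b in zip(rows, cols):
    if abs(cos[a, b]) > 0.99:
        print(f"columns {a} and {b}: cos = {cos[a, b]:.4f}")
```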

“whether a group of columns is ‘nearly linear combinations’ within itself?”
That part of the question is not clear.

Thanks for responding. Yes, you could test the dot product of all normalized pairs.

What I mean is finding groups of weight columns such that, for example, column1 ≈ a × column2 + b × column3.

That would mean that for any input to the FC or conv layer, activation1 is redundant, because it always equals a × activation2 + b × activation3.

An efficient way to find these pairs or groups could allow the FC layer to be made smaller, or redundant conv kernels to be removed.
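
A sketch of one way to find such columns (my own illustration, not an established recipe): least-squares-fit each column against the remaining ones and check the relative residual. The function name `redundancy_scores`, the planted column, and the threshold are hypothetical:

```python
import numpy as np

def redundancy_scores(W):
    """For each column k, fit it as a linear combination of the other
    columns and return the relative residual; a score near 0 means
    column k is nearly a linear combination of the rest."""
    n = W.shape[1]
    scores = np.empty(n)
    for k in range(n):
        others = np.delete(W, k, axis=1)
        coef, *_ = np.linalg.lstsq(others, W[:, k], rcond=None)
        resid = W[:, k] - others @ coef
        scores[k] = np.linalg.norm(resid) / np.linalg.norm(W[:, k])
    return scores

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))
W[:, 5] = 0.7 * W[:, 1] - 1.3 * W[:, 2]   # plant a redundant column
print(np.round(redundancy_scores(W), 3))  # column 5 scores ~0
```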

Wikipedia tells me that I’ll need to learn about matrix rank and decomposition.

Make a separate matrix from all the columns you want to test for linear dependency, then compute its rank. If the rank is lower than the number of columns, there is a linear dependency and one or more columns are redundant.
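
One caveat: an exact rank test only catches exact dependency, while the question asks about “nearly” dependent columns. In floating point, NumPy’s `matrix_rank` already thresholds small singular values, and looking at the singular values directly shows how close to dependent the columns are. A small sketch, with an illustrative matrix and tolerance:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))
W[:, 5] = 0.7 * W[:, 1] - 1.3 * W[:, 2] + 1e-3 * rng.standard_normal(64)

# matrix_rank counts singular values above a tolerance; a rank below
# the number of columns signals (near-)linear dependency.
print(np.linalg.matrix_rank(W, tol=1e-2), "of", W.shape[1], "columns independent")

# The singular values themselves show *how* close to dependent the
# columns are: one value much smaller than the rest means one column
# is nearly a linear combination of the others.
print(np.round(np.linalg.svd(W, compute_uv=False), 3))
```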