I want to compute eigenvectors for an array of data (in my actual case, a cloud of polygons).
To do so I wrote this function:
import numpy as np

def eigen(data):
    eigenvectors = []
    eigenvalues = []
    for d in data:
        # compute covariance for each triangle
        cov = np.cov(d, ddof=0, rowvar=False)
        # compute eigen vectors
        vals, vecs = np.linalg.eig(cov)
        eigenvalues.append(vals)
        eigenvectors.append(vecs)
    return np.array(eigenvalues), np.array(eigenvectors)
Running this on some test data:
import cProfile
triangles = np.random.random((10**4,3,3,)) # 10k 3D triangles
cProfile.run('eigen(triangles)') # 550005 function calls in 0.933 seconds
Works fine, but it gets very slow because of the iteration loop. Is there a faster way to compute the data I need without iterating over the array? And if not, can anyone suggest ways to speed it up?
Hack It!
Well, I hacked into the covariance func definition and put in the stated inputs, ddof=0 and rowvar=False. As it turns out, everything reduces to just three lines:
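A sketch of those three lines for a single 2D array (I'm calling it m; with rowvar=False the rows are the observations, and ddof=0 divides by their count):

nC = m.shape[0]            # number of observations (rows, since rowvar=False)
X = m - m.mean(0)          # center each column
out = np.dot(X.T, X) / nC  # biased covariance, i.e. ddof=0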
To extend it to our 3D array case, I wrote down the loopy version, with these three lines iterated over the 2D sections of the 3D input array, like so:
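A sketch of that loopy version, with data being the (10000, 3, 3) input array and the hacked result checked against np.cov for each 2D section:

for d in data:
    # reference result via np.cov
    org_cov = np.cov(d, ddof=0, rowvar=False)

    # same thing via the hacked three lines
    nC = d.shape[0]
    X = d - d.mean(0)
    hacked_cov = np.dot(X.T, X) / nC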
Boost-it-up
We need to speed up those last three lines. The computation of X across all iterations could be done with broadcasting:
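A sketch, with diffs as my name for the stacked centered sections:

# center every 2D section of data in one shot via broadcasting
diffs = data - data.mean(1, keepdims=True)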
Next up, the dot-product calculation for all iterations could be done with transpose and np.dot, but that transpose could be a costly affair for such a multi-dimensional array. A better alternative exists in np.einsum, like so:
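A sketch of the einsum version, with cov3D as my name for the stacked covariance matrices:

# batched X.T @ X for every section, divided by the number of observations (ddof=0)
cov3D = np.einsum('ijk,ijl->ikl', diffs, diffs) / data.shape[1]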
Use it!
To sum up: the per-triangle covariance computation from the original loop,
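for d in data:
    # compute covariance for each triangle
    cov = np.cov(d, ddof=0, rowvar=False)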
could be pre-computed for all the triangles at once, like so:
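# sketch: the broadcasting and einsum steps from above, pulled together
diffs = data - data.mean(1, keepdims=True)
cov3D = np.einsum('ijk,ijl->ikl', diffs, diffs) / data.shape[1]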
These pre-computed values could then be used across iterations to compute the eigenvectors, like so:
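# sketch: the loop now only does the per-triangle eigen-decomposition
eigenvalues, eigenvectors = [], []
for i in range(len(data)):
    # directly use the pre-computed covariance for each triangle
    vals, vecs = np.linalg.eig(cov3D[i])
    eigenvalues.append(vals)
    eigenvectors.append(vecs)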
Test It!
Here are some runtime tests to assess the effect of pre-computing the covariance results:
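A sketch of how such a comparison could be run; the function names are my own and the actual numbers will depend on your machine:

import numpy as np
from timeit import timeit

def cov_loopy(data):
    # per-triangle covariance, as in the original eigen()
    return np.array([np.cov(d, ddof=0, rowvar=False) for d in data])

def cov_vectorized(data):
    # pre-computed covariance via broadcasting + einsum
    diffs = data - data.mean(1, keepdims=True)
    return np.einsum('ijk,ijl->ikl', diffs, diffs) / data.shape[1]

triangles = np.random.random((10**4, 3, 3))

# both approaches should agree before comparing speed
assert np.allclose(cov_loopy(triangles), cov_vectorized(triangles))

print(timeit(lambda: cov_loopy(triangles), number=10))
print(timeit(lambda: cov_vectorized(triangles), number=10))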
I don't know how much of a speedup you can actually achieve.
Here is a slight modification that can help a little: