Interpolate whole arrays of complex numbers

Posted 2020-03-30 03:53

Question:

I have a number of 2-dimensional np.arrays (all of equal size) containing complex numbers. Each of them belongs to one position in a 4-dimensional space. Those positions are sparse and distributed irregularly (a latin hypercube to be precise). I would like to interpolate this data to other points in the same 4-dimensional space.

I can successfully do this for simple numbers, using either sklearn.kriging(), scipy.interpolate.Rbf() (or others):

import numpy as np
from sklearn import gaussian_process as gp

# array of co-ordinates: two points in 4-D space
X = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])

# two numbers, one for each of the points above
Y = np.array([1,
              0])

# define the type of gaussian process I want
kriging = gp.GaussianProcess(theta0=1e-2, thetaL=1e-4, thetaU=4.0,
                             corr='linear', normalize=True, nugget=0.00001,
                             optimizer='fmin_cobyla')

# train the model on the data
kmodel = kriging.fit(X,Y)

# interpolate
kmodel.predict(np.array([0.5, 0.5, 0.0, 0.0]))
# returns: array([ 0.5])

If I try to use arrays (or just complex numbers) as data, this doesn't work:

# two arrays of complex numbers, instead of the numbers 
Y = np.array([[1+1j, -1-1j],
              [0+0j,  0+0j]])

kmodel = kriging.fit(X,Y)
# returns: ValueError: The number of features in X (X.shape[1] = 1) should match the sample size used for fit() which is 4.

This is obvious, since the docstring for kriging.fit() clearly states that it needs an array of n scalars, one for each element in the first dimension of X.

One solution is to decompose the arrays in Y into individual numbers, split those into real and imaginary parts, interpolate each part separately, and then put everything back together. I can do this with the right combination of loops and some artistry, but it would be nice if there were a method (e.g. in scipy.interpolate) that could handle an entire np.array instead of scalar values.
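
For illustration, the workaround I have in mind would look roughly like the sketch below, with one scipy.interpolate.Rbf model per array element and per real/imaginary part (the shapes and sample values are made up):

import numpy as np
from scipy.interpolate import Rbf

# five sample points in 4-D space (rows of X); values are made up
X = np.random.rand(5, 4)
# one 2x2 array of complex numbers attached to each sample point
Y = np.random.rand(5, 2, 2) + 1j * np.random.rand(5, 2, 2)

def interpolate_complex_arrays(X, Y, x_new):
    """Interpolate every element of the complex arrays in Y at the point x_new."""
    n_samples = X.shape[0]
    flat = Y.reshape(n_samples, -1)              # one column per array element
    result = np.empty(flat.shape[1], dtype=complex)
    for k in range(flat.shape[1]):
        # separate Rbf models for the real and imaginary part of element k
        rbf_re = Rbf(*X.T, flat[:, k].real)
        rbf_im = Rbf(*X.T, flat[:, k].imag)
        result[k] = rbf_re(*x_new) + 1j * rbf_im(*x_new)
    return result.reshape(Y.shape[1:])

print(interpolate_complex_arrays(X, Y, np.array([0.5, 0.5, 0.0, 0.0])))

This works, but it fits two separate models per array element, which is exactly the kind of bookkeeping I would like to avoid.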

I'm not fixed on a specific algorithm (yet), so I'd be happy to know about any that can use arrays of complex numbers as the "variable" to be interpolated. Since, as I said, there are only a few irregularly distributed points in space (and no grid to interpolate on), simple linear interpolation won't do, of course.

Answer 1:

There are two ways of looking at complex numbers:

  1. Cartesian Form ( a + bi ) and
  2. Polar/Euler Form ( A * exp(i * phi) )

When you say you want to interpolate between complex values, do you want to interpolate with respect to the real/imaginary components (1), or with respect to the number's magnitude and phase (2)?

You CAN break things down into real and imaginary components,

X = 2 + 5j
X_real = np.real(X)
X_imag = np.imag(X)

# Interpolate the X_real and X_imag

# Reconstruct X
X2 = X_real + 1j * X_imag

However, in real-life applications that involve complex numbers, such as digital filter design, you quite often want to work with the numbers in polar/exponential form.

Therefore, instead of interpolating the np.real() and np.imag() components, you may want to break the numbers down into magnitude and phase using np.abs() and np.angle() (or np.arctan2()), and interpolate those separately. You might do this, for example, when interpolating the Fourier transform of a digital filter.

Y = 1+2j
mag = np.abs(Y)
phase = np.angle(Y)

The interpolated values can be converted back into complex (Cartesian) numbers using Euler's formula:

# Complex number
y = mag * np.exp( 1j * phase)

# Or, if you want the real and imaginary components separately,
realPart, imagPart = mag * np.cos(phase) , mag * np.sin(phase)

Depending on what you're doing, this gives you some real flexibility with the interpolation methods you use.
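
For instance, a minimal end-to-end sketch of the magnitude/phase route could look like this. It uses 1-D np.interp purely for illustration (any interpolator, such as Rbf or a Gaussian process, would slot in the same way), the sample values are made up, and the phase is unwrapped before interpolating to avoid 2*pi jumps:

import numpy as np

# samples along one axis and the complex values measured there (made-up data)
x = np.array([0.0, 1.0, 2.0])
y = np.array([1 + 1j, 2j, -3 + 0j])

mag = np.abs(y)
phase = np.unwrap(np.angle(y))      # unwrap to avoid jumps of 2*pi

# interpolate magnitude and phase separately
x_new = np.linspace(0.0, 2.0, 9)
mag_i = np.interp(x_new, x, mag)
phase_i = np.interp(x_new, x, phase)

# back to Cartesian form via Euler's formula
y_new = mag_i * np.exp(1j * phase_i)
print(y_new)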



Answer 2:

I ended up working around the problem, but after learning a good deal more about response surfaces and the like, I now understand that this is a far-from-trivial problem. I could not have expected a simple solution in numpy, and the question would have probably been better placed in a forum on mathematics than on programming.

If I had to tackle such a task again, I'd probably use scikit-learn and set up either a co-Kriging interpolation of both components, or two separate Kriging (or, more generally, Gaussian process) models that share a common set of model constants, optimized to minimize the combined error amplitude (i.e. the full model's squared error is the sum of both partial models' squared errors)

-- but first I'd go and check whether there are already useful papers on the topic.
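
For what it's worth, a rough sketch of the shared-constants idea with the current scikit-learn API (GaussianProcessRegressor has replaced the old GaussianProcess class used in the question) could look like the following. Fitting a single regressor on a two-column target [real, imag] makes both outputs share one optimized kernel, which approximates the scheme described above; the data here are made up:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.random.rand(20, 4)                          # 20 sample points in 4-D space
y = np.random.rand(20) + 1j * np.random.rand(20)   # one complex value per point

# stack real and imaginary parts as two output columns sharing one kernel
Y = np.column_stack([y.real, y.imag])

kernel = RBF(length_scale=np.ones(4)) + WhiteKernel(noise_level=1e-5)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

# predict at a new point and reassemble the complex value
re_pred, im_pred = gpr.predict(np.array([[0.5, 0.5, 0.0, 0.0]]))[0]
print(re_pred + 1j * im_pred)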