I'm using scipy.interpolate.interp2d to create an interpolation function for a surface. I then have two arrays of real data for which I want to calculate interpolated values. If I pass the two arrays straight to the interp2d function, I get back a 2D array of values for every combination of x and y, not just the values at the (x, y) pairs.
My solution to this is to zip the two arrays into a list of coordinate pairs and pass this to the interpolation function in a loop:
from scipy import interpolate

# Cubic interpolant built from the tabulated surface
f_interp = interpolate.interp2d(X_table, Y_table, Z_table, kind='cubic')

# Pair up the sample coordinates (list() is needed on Python 3, where zip returns an iterator)
co_ords = list(zip(X, Y))
out = []
for i in range(len(co_ords)):
    x_i = co_ords[i][0]
    y_i = co_ords[i][1]
    value = f_interp(x_i, y_i)   # returns a length-1 array
    out.append(float(value))
My question is, is there a better (more elegant, Pythonic?) way of achieving the same result?
For one, you can unpack each coordinate pair directly in your loop instead of indexing into co_ords. Or even better, just replace the whole loop with a list comprehension over zip(X, Y); both variants are sketched below.
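A minimal sketch of both, using the f_interp, X and Y from the question:

# Unpack each (x, y) pair directly in the for statement
out = []
for x, y in zip(X, Y):
    out.append(float(f_interp(x, y)))

# Or replace the loop entirely with a list comprehension
out = [float(f_interp(x, y)) for x, y in zip(X, Y)]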
On a different note, I suggest using interpolate.griddata instead. It tends to behave much better than interp2d, and it accepts arbitrarily-shaped points as input. As you've seen, interp2d interpolators will only return values on a mesh.

Passing all of your points at once will probably be quite a lot faster than looping over them in Python. You could use scipy.interpolate.griddata.
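For example, a sketch assuming X_table, Y_table and Z_table are 1D arrays of scattered sample points, as in the question:

from scipy import interpolate

# Interpolate directly at the (X, Y) pairs; method='cubic' uses a Clough-Tocher scheme in 2D
out = interpolate.griddata((X_table, Y_table), Z_table, (X, Y), method='cubic')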
Or you could use one of the scipy.interpolate.BivariateSpline classes, e.g. SmoothBivariateSpline.
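A sketch of that route under the same assumptions; note the grid=False keyword, which asks for point-wise rather than mesh evaluation:

from scipy import interpolate

# Fit a smoothing bicubic spline to the scattered table data
spline = interpolate.SmoothBivariateSpline(X_table, Y_table, Z_table, kx=3, ky=3)

# grid=False evaluates at the (X[i], Y[i]) pairs instead of on a mesh
out = spline(X, Y, grid=False)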
CloughTocher2DInterpolator also works in a similar fashion, but without the grid=False parameter (it always returns a 1D output).
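A sketch of that variant under the same assumptions:

import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

# Build the interpolator from the scattered (x, y) -> z table
interp = CloughTocher2DInterpolator(np.column_stack((X_table, Y_table)), Z_table)

# Evaluating at the paired points returns a 1D array, one value per (X[i], Y[i])
out = interp(X, Y)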
Inspired by this thread, where someone recommends using the internal weights of the interp2d function, I've created the following wrapper. It has exactly the same interface as interp2d, but the interpolant evaluates pairs of inputs and returns a numpy array with the same shape as its inputs. The performance should be better than for loops or list comprehensions, but when evaluated on a grid it will be slightly outperformed by scipy's interp2d.
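A sketch of that idea (the class name PairwiseInterp2d is made up for illustration): it relies on interp2d storing its knots and coefficients in the tck attribute and on scipy's private dfitpack module (the f2py wrapper around FITPACK's bispeu), so it only applies to scipy versions that still ship interp2d, and details may vary between versions:

import numpy as np
from scipy import interpolate

class PairwiseInterp2d(interpolate.interp2d):
    """Same constructor as interp2d, but evaluation is point-wise."""

    def __call__(self, x, y):
        x = np.asarray(x)
        y = np.asarray(y)
        # bispeu evaluates the fitted spline at paired points instead of on a mesh;
        # self.tck is the (tx, ty, c, kx, ky) tuple stored by interp2d
        tx, ty, c, kx, ky = self.tck
        z, ier = interpolate.dfitpack.bispeu(tx, ty, c, kx, ky, x.ravel(), y.ravel())
        return z.reshape(x.shape)

# Usage mirrors f_interp from the question
f_interp = PairwiseInterp2d(X_table, Y_table, Z_table, kind='cubic')
out = f_interp(X, Y)   # same shape as X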
Try *args and tuple packing/unpacking, or just unpack each pair directly in the for statement, or just use a list comprehension (a quick sketch follows).
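For instance, using the f_interp, X and Y from the question:

# Each pair from zip(X, Y) can be splatted into the call with the * operator
out = [float(f_interp(*pair)) for pair in zip(X, Y)]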
As a side note, the "magic star" allows zip to be its own inverse!
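A tiny illustration of that (X_back and Y_back are just illustrative names):

pairs = list(zip(X, Y))        # zip the two arrays into pairs
X_back, Y_back = zip(*pairs)   # the "magic star" un-zips them into two tuples again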