Question:
I have four 2d points, p0 = (x0,y0), p1 = (x1,y1), etc. that form a quadrilateral. In my case, the quad is not rectangular, but it should at least be convex.
p2 -------- p3
|            |
t      p     |
|            |
p0 -------- p1
       s
I'm using bilinear interpolation. s and t are within [0..1], and the interpolated point is given by:
bilerp(s,t) = t*(s*p3+(1-s)*p2) + (1-t)*(s*p1+(1-s)*p0)
Here's the problem: I have a 2D point p that I know is inside the quad. I want to find the (s,t) that will give me that point when using bilinear interpolation.
Is there a simple formula to reverse the bilinear interpolation?
Thanks for the solutions. I posted my implementation of Naaff's solution as a wiki.
Answer 1:
I think it's easiest to think of your problem as an intersection problem: what is the parameter location (s,t) where the point p intersects the arbitrary 2D bilinear surface defined by p0, p1, p2 and p3.
The approach I'll take to solving this problem is similar to tspauld's suggestion.
Start with two equations in terms of x and y:
x = (1-s)*( (1-t)*x0 + t*x2 ) + s*( (1-t)*x1 + t*x3 )
y = (1-s)*( (1-t)*y0 + t*y2 ) + s*( (1-t)*y1 + t*y3 )
Solving for t:
t = ( (1-s)*(x0-x) + s*(x1-x) ) / ( (1-s)*(x0-x2) + s*(x1-x3) )
t = ( (1-s)*(y0-y) + s*(y1-y) ) / ( (1-s)*(y0-y2) + s*(y1-y3) )
We can now set these two expressions for t equal to each other to eliminate t. Cross-multiplying, moving everything to the left-hand side, and simplifying, we get an equation of the form:
A*(1-s)^2 + B*2s(1-s) + C*s^2 = 0
Where:
A = (p0-p) X (p0-p2)
B = ( (p0-p) X (p1-p3) + (p1-p) X (p0-p2) ) / 2
C = (p1-p) X (p1-p3)
Note that I've used the operator X to denote the 2D cross product (e.g., p0 X p1 = x0*y1 - y0*x1). I've formatted this equation as a quadratic Bernstein polynomial as this makes things more elegant and is more numerically stable. The solutions to s are the roots of this equation. We can find the roots using the quadratic formula for Bernstein polynomials:
s = ( (A-B) +- sqrt(B^2 - A*C) ) / ( A - 2*B + C )
The quadratic formula gives two answers due to the +-. If you're only interested in solutions where p lies within the bilinear surface then you can discard any answer where s is not between 0 and 1. To find t, simply substitute s back into one of the two equations above where we solved for t in terms of s.
I should point out one important special case. If the denominator A - 2*B + C = 0 then your quadratic polynomial is actually linear. In this case, you must use a much simpler equation to find s:
s = A / (A-C)
This will give you exactly one solution, unless A - C = 0 as well. If A = C you have two cases: A = C = 0 means every value of s contains p; otherwise no value of s contains p.
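As a quick sanity check of these formulas (a minimal sketch in C, not part of the original answer; a full implementation covering both branches is posted as the community wiki in Answer 3 below), consider the axis-aligned unit square p0=(0,0), p1=(1,0), p2=(0,1), p3=(1,1). Because that quad is a parallelogram, A - 2*B + C works out to exactly 0, so the linear branch applies and s = A/(A-C) recovers s = x (and the t equation recovers t = y):
#include <stdio.h>
/* 2D cross product, as used in the formulas above */
static double cross2(double ax, double ay, double bx, double by)
{
    return ax*by - ay*bx;
}
int main(void)
{
    double x = 0.3, y = 0.7;                        /* point to invert */
    double A = cross2(0-x, 0-y, 0-0, 0-1);          /* (p0-p) X (p0-p2) */
    double B = 0.5*( cross2(0-x, 0-y, 1-1, 0-1)     /* (p0-p) X (p1-p3) */
                   + cross2(1-x, 0-y, 0-0, 0-1) );  /* (p1-p) X (p0-p2) */
    double C = cross2(1-x, 0-y, 1-1, 0-1);          /* (p1-p) X (p1-p3) */
    printf("A-2B+C = %g (0 => parallelogram, use the linear branch)\n", A - 2*B + C);
    double s = A/(A - C);
    double t = ( (1-s)*(0-y) + s*(0-y) ) / ( (1-s)*(0-1) + s*(0-1) );
    printf("s = %g, t = %g (expect 0.3, 0.7)\n", s, t);
    return 0;
}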
Answer 2:
One could use Newton's method to iteratively solve the following nonlinear system of equations:
p = p0*(1-s)*(1-t) + p1*s*(1-t) + p2*s*t + p3*(1-s)*t.
Note that there are two equations (equality in both the x and y components of the equation), and two unknowns (s and t).
To apply Newton's method, we need the residual r, which is:
r = p - (p0*(1-s)*(1-t) + p1*s*(1-t) + p2*s*t + p3*(1-s)*t),
and the Jacobian matrix J, which is found by differentiating the residual. For our problem the Jacobian is:
J(:,1) = dr/ds = -p0*(1-t) + p1*(1-t) + p2*t - p3*t (first column of matrix)
J(:,2) = dr/dt = -p0*(1-s) - p1*s + p2*s + p3*(1-s) (second column).
To use Newton's method, one starts with an initial guess (s,t), and then performs the following iteration a handful of times:
(s,t) = (s,t) - J^-1 r,
recalculating J and r each iteration with the new values of s and t. At each iteration the dominant cost is in applying the inverse of the Jacobian to the residual (J^-1 r), by solving a 2x2 linear system with J as the coefficient matrix and r as the right hand side.
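If you prefer C to the MATLAB code further below, here is a minimal sketch of this iteration (my own rendering, not the answer's code): a fixed number of Newton steps, no clamping or convergence checks, using the same ordering as the formula above (p0~(0,0), p1~(1,0), p2~(1,1), p3~(0,1) in (s,t) space). The 2x2 system J*delta = r is solved in closed form with Cramer's rule.
/* Minimal sketch: invert the bilinear map with Newton's method.
 * Assumes the ordering used in this answer: p0~(0,0), p1~(1,0),
 * p2~(1,1), p3~(0,1) in (s,t) space. Point arrays are {x, y}. */
void invBilerpNewton(const double p0[2], const double p1[2],
                     const double p2[2], const double p3[2],
                     const double p[2], int iter,
                     double *s_out, double *t_out)
{
    double s = 0.5, t = 0.5;                 /* initial guess: center of the quad */
    for (int k = 0; k < iter; k++) {
        double r[2], Js[2], Jt[2];
        for (int i = 0; i < 2; i++) {
            r[i]  = p0[i]*(1-s)*(1-t) + p1[i]*s*(1-t)
                  + p2[i]*s*t + p3[i]*(1-s)*t - p[i];               /* residual */
            Js[i] = -p0[i]*(1-t) + p1[i]*(1-t) + p2[i]*t - p3[i]*t; /* dr/ds */
            Jt[i] = -p0[i]*(1-s) - p1[i]*s + p2[i]*s + p3[i]*(1-s); /* dr/dt */
        }
        /* Solve the 2x2 system J*delta = r (Cramer's rule) and take the step */
        double det = Js[0]*Jt[1] - Jt[0]*Js[1];
        s -= ( Jt[1]*r[0] - Jt[0]*r[1]) / det;
        t -= (-Js[1]*r[0] + Js[0]*r[1]) / det;
    }
    *s_out = s;
    *t_out = t;
}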
Intuition for the method:
Intuitively, if the quadrilateral were a parallelogram, it would be much easier to solve the problem. Newton's method is solving the quadrilateral problem with successive parallelogram approximations. At each iteration we
Use local derivative information at the point (s,t) to approximate the quadrilateral with a parallelogram.
Find the correct (s,t) values under the parallelogram approximation, by solving a linear system.
Jump to this new point and repeat.
Advantages of the method:
As is expected for Newton-type methods, the convergence is extremely fast. As the iterations proceed, not only does the method get closer and closer to the true point, but the local parallelogram approximations become more accurate as well, so the rate of convergence itself increases (in the iterative methods jargon, we say that Newton's method is quadratically convergent). In practice this means that the number of correct digits approximately doubles every iteration.
Here's a representative table of iterations vs. error on a random trial I did (see below for code):
Iteration Error
1 0.0610
2 9.8914e-04
3 2.6872e-07
4 1.9810e-14
5 5.5511e-17 (machine epsilon)
After two iterations the error is small enough to be effectively unnoticeable and good enough for most practical purposes, and after 5 iterations the result is accurate to the limits of machine precision.
If you fix the number of iterations (say, 3 iterations for most practical applications and 8 iterations if you need very high accuracy), then the algorithm has a very simple and straightforward logic to it, with a structure that lends itself well to high-performance computation. There is no need for checking all sorts of special edge cases*, and using different logic depending on the results. It is just a for loop containing a few simple formulas. Below I highlight several advantages of this approach over the traditional formula based approaches found in other answers here and around the internet:
Easy to code. Just make a for loop and type in a few formulas.
No conditionals or branching (if/then), which typically allows for much better pipelining efficiency.
No square roots, and only 1 division required per iteration (if written well). All the other operations are simple additions, subtractions, and multiplications. Square roots and divisions are typically several times slower than additions/subtractions/multiplications, and can screw up cache efficiency on certain architectures (most notably on certain embedded systems). Indeed, if you look under the hood at how square roots and divisions are actually calculated by modern programming languages, they both use variants of Newton's method, sometimes in hardware and sometimes in software depending on the architecture.
Can be easily vectorized to work on arrays with a huge number of quadrilaterals at once. See my vectorized code below for an example of how to do this.
Extends to arbitrary dimensions. The algorithm extends in a straightforward way to inverse multilinear interpolation in any number of dimensions (2d, 3d, 4d, ...). I have included a 3D version below, and one can imagine writing a simple recursive version, or using automatic differentiation libraries to go to n-dimensions. Newton's method typically exhibits dimension-independent convergence rates, so in principle the method should be scalable to a few thousand dimensions (!) on current hardware (after which point constructing and solving of the n-by-n matrix J will probably be the limiting factor).
Of course, most of these things also depend on the hardware, compiler, and a host of other factors, so your mileage may vary.
Code:
Anyway, here's my MATLAB code (I release everything here to the public domain):
Basic 2D version:
function q = bilinearInverse(p,p1,p2,p3,p4,iter)
%Computes the inverse of the bilinear map from [0,1]^2 to the convex
% quadrilateral defined by the ordered points p1 -> p2 -> p3 -> p4 -> p1.
%Uses Newton's method. Inputs must be column vectors.
q = [0.5; 0.5]; %initial guess
for k=1:iter
    s = q(1);
    t = q(2);
    r = p1*(1-s)*(1-t) + p2*s*(1-t) + p3*s*t + p4*(1-s)*t - p; %residual
    Js = -p1*(1-t) + p2*(1-t) + p3*t - p4*t; %dr/ds
    Jt = -p1*(1-s) - p2*s + p3*s + p4*(1-s); %dr/dt
    J = [Js,Jt];
    q = q - J\r;
    q = max(min(q,1),0);
end
end
Example usage:
% Test_bilinearInverse.m
p1=[0.1;-0.1];
p2=[2.2;-0.9];
p3=[1.75;2.3];
p4=[-1.2;1.1];
q0 = rand(2,1);
s0 = q0(1);
t0 = q0(2);
p = p1*(1-s0)*(1-t0) + p2*s0*(1-t0) + p3*s0*t0 + p4*(1-s0)*t0;
iter=5;
q = bilinearInverse(p,p1,p2,p3,p4,iter);
err = norm(q0-q);
disp(['Error after ',num2str(iter), ' iterations: ', num2str(err)])
Example output:
>> test_bilinearInverse
Error after 5 iterations: 1.5701e-16
Fast vectorized 2D version:
function [ss,tt] = bilinearInverseFast(px,py, p1x,p1y, p2x,p2y, p3x,p3y, p4x,p4y, iter)
%Computes the inverse of the bilinear map from [0,1]^2 to the convex
% quadrilateral defined by the ordered points p1 -> p2 -> p3 -> p4 -> p1,
% where p1x is the x-coordinate of p1, p1y is the y-coordinate, etc.
% Vectorized: if you have a lot of quadrilaterals and
% points to interpolate, then p1x(k) is the x-coordinate of point p1 on the
% k'th quadrilateral, and so forth.
%Uses Newton's method. Inputs must be column vectors.
ss = 0.5 * ones(length(px),1);
tt = 0.5 * ones(length(py),1);
for k=1:iter
    r1 = p1x.*(1-ss).*(1-tt) + p2x.*ss.*(1-tt) + p3x.*ss.*tt + p4x.*(1-ss).*tt - px; %x-residual
    r2 = p1y.*(1-ss).*(1-tt) + p2y.*ss.*(1-tt) + p3y.*ss.*tt + p4y.*(1-ss).*tt - py; %y-residual
    J11 = -p1x.*(1-tt) + p2x.*(1-tt) + p3x.*tt - p4x.*tt; %dr1/ds
    J21 = -p1y.*(1-tt) + p2y.*(1-tt) + p3y.*tt - p4y.*tt; %dr2/ds
    J12 = -p1x.*(1-ss) - p2x.*ss + p3x.*ss + p4x.*(1-ss); %dr1/dt
    J22 = -p1y.*(1-ss) - p2y.*ss + p3y.*ss + p4y.*(1-ss); %dr2/dt
    inv_detJ = 1./(J11.*J22 - J12.*J21);
    ss = ss - inv_detJ.*(J22.*r1 - J12.*r2);
    tt = tt - inv_detJ.*(-J21.*r1 + J11.*r2);
    ss = min(max(ss, 0),1);
    tt = min(max(tt, 0),1);
end
end
For speed this code implicitly uses the following formula for the inverse of a 2x2 matrix:
[a,b;c,d]^-1 = (1/(ad-bc))[d, -b; -c, a]
Example usage:
% test_bilinearInverseFast.m
n_quads = 1e6; % 1 million quads
iter = 8;
% Make random quadrilaterals, ensuring points are ordered convex-ly
n_randpts = 4;
pp_xx = zeros(n_randpts,n_quads);
pp_yy = zeros(n_randpts,n_quads);
disp('Generating convex point ordering (may take some time).')
for k=1:n_quads
    while true
        p_xx = randn(4,1);
        p_yy = randn(4,1);
        conv_inds = convhull(p_xx, p_yy);
        if length(conv_inds) == 5
            break
        end
    end
    pp_xx(1:4,k) = p_xx(conv_inds(1:end-1));
    pp_yy(1:4,k) = p_yy(conv_inds(1:end-1));
end
pp1x = pp_xx(1,:);
pp1y = pp_yy(1,:);
pp2x = pp_xx(2,:);
pp2y = pp_yy(2,:);
pp3x = pp_xx(3,:);
pp3y = pp_yy(3,:);
pp4x = pp_xx(4,:);
pp4y = pp_yy(4,:);
% Make random interior points
ss0 = rand(1,n_quads);
tt0 = rand(1,n_quads);
ppx = pp1x.*(1-ss0).*(1-tt0) + pp2x.*ss0.*(1-tt0) + pp3x.*ss0.*tt0 + pp4x.*(1-ss0).*tt0;
ppy = pp1y.*(1-ss0).*(1-tt0) + pp2y.*ss0.*(1-tt0) + pp3y.*ss0.*tt0 + pp4y.*(1-ss0).*tt0;
pp = [ppx; ppy];
% Run fast inverse bilinear interpolation code:
disp('Running inverse bilinear interpolation.')
tic
[ss,tt] = bilinearInverseFast(ppx,ppy, pp1x,pp1y, pp2x,pp2y, pp3x,pp3y, pp4x,pp4y, 10);
time_elapsed = toc;
disp(['Number of quadrilaterals: ', num2str(n_quads)])
disp(['Inverse bilinear interpolation took: ', num2str(time_elapsed), ' seconds'])
err = norm([ss0;tt0] - [ss;tt],'fro')/norm([ss0;tt0],'fro');
disp(['Error: ', num2str(err)])
Example output:
>> test_bilinearInverseFast
Generating convex point ordering (may take some time).
Running inverse bilinear interpolation.
Number of quadrilaterals: 1000000
Inverse bilinear interpolation took: 0.5274 seconds
Error: 8.6881e-16
3D version:
Includes some code to display the convergence progress.
function ss = trilinearInverse(p, p1,p2,p3,p4,p5,p6,p7,p8, iter)
%Computes the inverse of the trilinear map from [0,1]^3 to the box defined
% by points p1,...,p8, where the points are ordered consistent with
% p1~(0,0,0), p2~(0,0,1), p3~(0,1,0), p4~(1,0,0), p5~(0,1,1),
% p6~(1,0,1), p7~(1,1,0), p8~(1,1,1)
%Uses Gauss-Newton method. Inputs must be column vectors.
tol = 1e-9;
ss = [0.5; 0.5; 0.5]; %initial guess
for k=1:iter
    s = ss(1);
    t = ss(2);
    w = ss(3);
    r = p1*(1-s)*(1-t)*(1-w) + p2*s*(1-t)*(1-w) + ...
        p3*(1-s)*t*(1-w) + p4*(1-s)*(1-t)*w + ...
        p5*s*t*(1-w) + p6*s*(1-t)*w + ...
        p7*(1-s)*t*w + p8*s*t*w - p;
    disp(['k= ', num2str(k), ...
          ', residual norm= ', num2str(norm(r)),...
          ', [s,t,w]= ',num2str([s,t,w])])
    if (norm(r) < tol)
        break
    end
    Js = -p1*(1-t)*(1-w) + p2*(1-t)*(1-w) + ...
         -p3*t*(1-w) - p4*(1-t)*w + ...
         p5*t*(1-w) + p6*(1-t)*w + ...
         -p7*t*w + p8*t*w;
    Jt = -p1*(1-s)*(1-w) - p2*s*(1-w) + ...
         p3*(1-s)*(1-w) - p4*(1-s)*w + ...
         p5*s*(1-w) - p6*s*w + ...
         p7*(1-s)*w + p8*s*w;
    Jw = -p1*(1-s)*(1-t) - p2*s*(1-t) + ...
         -p3*(1-s)*t + p4*(1-s)*(1-t) + ...
         -p5*s*t + p6*s*(1-t) + ...
         p7*(1-s)*t + p8*s*t;
    J = [Js,Jt,Jw];
    ss = ss - J\r;
end
end
Example usage:
%test_trilinearInverse.m
h = 0.25;
p1 = [0;0;0] + h*randn(3,1);
p2 = [0;0;1] + h*randn(3,1);
p3 = [0;1;0] + h*randn(3,1);
p4 = [1;0;0] + h*randn(3,1);
p5 = [0;1;1] + h*randn(3,1);
p6 = [1;0;1] + h*randn(3,1);
p7 = [1;1;0] + h*randn(3,1);
p8 = [1;1;1] + h*randn(3,1);
s0 = rand;
t0 = rand;
w0 = rand;
p = p1*(1-s0)*(1-t0)*(1-w0) + p2*s0*(1-t0)*(1-w0) + ...
p3*(1-s0)*t0*(1-w0) + p4*(1-s0)*(1-t0)*w0 + ...
p5*s0*t0*(1-w0) + p6*s0*(1-t0)*w0 + ...
p7*(1-s0)*t0*w0 + p8*s0*t0*w0;
ss = trilinearInverse(p, p1,p2,p3,p4,p5,p6,p7,p8, 10); %at most 10 iterations; stops early once the residual is below tol
disp(['error= ', num2str(norm(ss - [s0;t0;w0]))])
Example output:
test_trilinearInverse
k= 1, residual norm= 0.38102, [s,t,w]= 0.5 0.5 0.5
k= 2, residual norm= 0.025324, [s,t,w]= 0.37896 0.59901 0.17658
k= 3, residual norm= 0.00037108, [s,t,w]= 0.40228 0.62124 0.15398
k= 4, residual norm= 9.1441e-08, [s,t,w]= 0.40218 0.62067 0.15437
k= 5, residual norm= 3.3548e-15, [s,t,w]= 0.40218 0.62067 0.15437
error= 4.8759e-15
One has to be careful about the ordering of the input points, since inverse multilinear interpolation is only well-defined if the shape has positive volume, and in 3D it is much easier to choose points that make the shape turn inside-out of itself.
Answer 3:
Here's my implementation of Naaff's solution, as a community wiki. Thanks again.
This is a C implementation, but it should also work in C++. It includes a fuzz-testing function.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
int equals( double a, double b, double tolerance )
{
return ( a == b ) ||
( ( a <= ( b + tolerance ) ) &&
( a >= ( b - tolerance ) ) );
}
double cross2( double x0, double y0, double x1, double y1 )
{
return x0*y1 - y0*x1;
}
int in_range( double val, double range_min, double range_max, double tol )
{
return ((val+tol) >= range_min) && ((val-tol) <= range_max);
}
/* Returns number of solutions found. If there is one valid solution, it will be put in s and t */
int inverseBilerp( double x0, double y0, double x1, double y1, double x2, double y2, double x3, double y3, double x, double y, double* sout, double* tout, double* s2out, double* t2out )
{
int t_valid, t2_valid;
double a = cross2( x0-x, y0-y, x0-x2, y0-y2 );
double b1 = cross2( x0-x, y0-y, x1-x3, y1-y3 );
double b2 = cross2( x1-x, y1-y, x0-x2, y0-y2 );
double c = cross2( x1-x, y1-y, x1-x3, y1-y3 );
double b = 0.5 * (b1 + b2);
double s, s2, t, t2;
double am2bpc = a-2*b+c;
/* this is how many valid s values we have */
int num_valid_s = 0;
if ( equals( am2bpc, 0, 1e-10 ) )
{
if ( equals( a-c, 0, 1e-10 ) )
{
/* Looks like the input is a line */
/* You could set s=0.5 and solve for t if you wanted to */
return 0;
}
s = a / (a-c);
if ( in_range( s, 0, 1, 1e-10 ) )
num_valid_s = 1;
}
else
{
double sqrtbsqmac = sqrt( b*b - a*c );
s = ((a-b) - sqrtbsqmac) / am2bpc;
s2 = ((a-b) + sqrtbsqmac) / am2bpc;
num_valid_s = 0;
if ( in_range( s, 0, 1, 1e-10 ) )
{
num_valid_s++;
if ( in_range( s2, 0, 1, 1e-10 ) )
num_valid_s++;
}
else
{
if ( in_range( s2, 0, 1, 1e-10 ) )
{
num_valid_s++;
s = s2;
}
}
}
if ( num_valid_s == 0 )
return 0;
t_valid = 0;
if ( num_valid_s >= 1 )
{
double tdenom_x = (1-s)*(x0-x2) + s*(x1-x3);
double tdenom_y = (1-s)*(y0-y2) + s*(y1-y3);
t_valid = 1;
if ( equals( tdenom_x, 0, 1e-10 ) && equals( tdenom_y, 0, 1e-10 ) )
{
t_valid = 0;
}
else
{
/* Choose the more robust denominator */
if ( fabs( tdenom_x ) > fabs( tdenom_y ) )
{
t = ( (1-s)*(x0-x) + s*(x1-x) ) / ( tdenom_x );
}
else
{
t = ( (1-s)*(y0-y) + s*(y1-y) ) / ( tdenom_y );
}
if ( !in_range( t, 0, 1, 1e-10 ) )
t_valid = 0;
}
}
/* Same thing for s2 and t2 */
t2_valid = 0;
if ( num_valid_s == 2 )
{
double tdenom_x = (1-s2)*(x0-x2) + s2*(x1-x3);
double tdenom_y = (1-s2)*(y0-y2) + s2*(y1-y3);
t2_valid = 1;
if ( equals( tdenom_x, 0, 1e-10 ) && equals( tdenom_y, 0, 1e-10 ) )
{
t2_valid = 0;
}
else
{
/* Choose the more robust denominator */
if ( fabs( tdenom_x ) > fabs( tdenom_y ) )
{
t2 = ( (1-s2)*(x0-x) + s2*(x1-x) ) / ( tdenom_x );
}
else
{
t2 = ( (1-s2)*(y0-y) + s2*(y1-y) ) / ( tdenom_y );
}
if ( !in_range( t2, 0, 1, 1e-10 ) )
t2_valid = 0;
}
}
/* Final cleanup */
if ( t2_valid && !t_valid )
{
s = s2;
t = t2;
t_valid = t2_valid;
t2_valid = 0;
}
/* Output */
if ( t_valid )
{
*sout = s;
*tout = t;
}
if ( t2_valid )
{
*s2out = s2;
*t2out = t2;
}
return t_valid + t2_valid;
}
void bilerp( double x0, double y0, double x1, double y1, double x2, double y2, double x3, double y3, double s, double t, double* x, double* y )
{
*x = t*(s*x3+(1-s)*x2) + (1-t)*(s*x1+(1-s)*x0);
*y = t*(s*y3+(1-s)*y2) + (1-t)*(s*y1+(1-s)*y0);
}
double randrange( double range_min, double range_max )
{
double range_width = range_max - range_min;
double rand01 = (rand() / (double)RAND_MAX);
return (rand01 * range_width) + range_min;
}
/* Returns number of failed trials */
int fuzzTestInvBilerp( int num_trials )
{
int num_failed = 0;
double x0, y0, x1, y1, x2, y2, x3, y3, x, y, s, t, s2, t2, orig_s, orig_t;
int num_st;
int itrial;
for ( itrial = 0; itrial < num_trials; itrial++ )
{
int failed = 0;
/* Get random positions for the corners of the quad */
x0 = randrange( -10, 10 );
y0 = randrange( -10, 10 );
x1 = randrange( -10, 10 );
y1 = randrange( -10, 10 );
x2 = randrange( -10, 10 );
y2 = randrange( -10, 10 );
x3 = randrange( -10, 10 );
y3 = randrange( -10, 10 );
/*x0 = 0, y0 = 0, x1 = 1, y1 = 0, x2 = 0, y2 = 1, x3 = 1, y3 = 1;*/
/* Get random s and t */
s = randrange( 0, 1 );
t = randrange( 0, 1 );
orig_s = s;
orig_t = t;
/* bilerp to get x and y */
bilerp( x0, y0, x1, y1, x2, y2, x3, y3, s, t, &x, &y );
/* invert */
num_st = inverseBilerp( x0, y0, x1, y1, x2, y2, x3, y3, x, y, &s, &t, &s2, &t2 );
if ( num_st == 0 )
{
failed = 1;
}
else if ( num_st == 1 )
{
if ( !(equals( orig_s, s, 1e-5 ) && equals( orig_t, t, 1e-5 )) )
failed = 1;
}
else if ( num_st == 2 )
{
if ( !((equals( orig_s, s , 1e-5 ) && equals( orig_t, t , 1e-5 )) ||
(equals( orig_s, s2, 1e-5 ) && equals( orig_t, t2, 1e-5 )) ) )
failed = 1;
}
if ( failed )
{
num_failed++;
printf("Failed trial %d\n", itrial);
}
}
return num_failed;
}
int main( int argc, char** argv )
{
int num_failed;
srand( 0 );
num_failed = fuzzTestInvBilerp( 100000000 );
printf("%d of the tests failed\n", num_failed);
getc(stdin);
return 0;
}
Answer 4:
Since you're working in 2D, your bilerp function is really 2 equations, 1 for x and 1 for y. They can be rewritten in the form:
x = t * s * A.x + t * B.x + s * C.x + D.x
y = t * s * A.y + t * B.y + s * C.y + D.y
Where:
A = p3 - p2 - p1 + p0
B = p2 - p0
C = p1 - p0
D = p0
Rewrite the first equation to get t in terms of s, substitute it into the second, and solve for s.
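To spell out that substitution (my sketch, not part of the original answer): from the first equation, assuming s*A.x + B.x is nonzero,
t = (x - s*C.x - D.x) / (s*A.x + B.x)
Plugging this into the y equation and clearing the denominator leaves a quadratic in s, which can be solved with the usual quadratic formula; it is essentially the same quadratic that appears in Answer 1 and Answer 5.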
Answer 5:
This is my implementation ... I guess it is faster than tfiniga's.
#include <math.h>
void invbilerp( float x, float y, float x00, float x01, float x10, float x11, float y00, float y01, float y10, float y11, float uv[] ){
// substitution 1 (see derivation)
float dx0 = x01 - x00;
float dx1 = x11 - x10;
float dy0 = y01 - y00;
float dy1 = y11 - y10;
// substitution 2 (see derivation)
float x00x = x00 - x;
float xd = x10 - x00;
float dxd = dx1 - dx0;
float y00y = y00 - y;
float yd = y10 - y00;
float dyd = dy1 - dy0;
// solution of the quadratic equation
float c = x00x*yd - y00y*xd;
float b = dx0*yd + dyd*x00x - dy0*xd - dxd*y00y;
float a = dx0*dyd - dy0*dxd;
float D2 = b*b - 4*a*c;
float D = sqrtf( D2 );
float u = (-b - D)/(2*a);
// back-substitution of "u" to obtain "v"
float v;
float denom_x = xd + u*dxd;
float denom_y = yd + u*dyd;
if( fabsf(denom_x)>fabsf(denom_y) ){ v = -( x00x + u*dx0 )/denom_x; }else{ v = -( y00y + u*dy0 )/denom_y; }
uv[0]=u;
uv[1]=v;
/*
// do you really need the second solution?
u = (-b + D)/(2*a);
denom_x = xd + u*dxd;
denom_y = yd + u*dyd;
if( fabsf(denom_x)>fabsf(denom_y) ){ v = -( x00x + u*dx0 )/denom_x; }else{ v = -( y00y + u*dy0 )/denom_y; }
uv[2]=u;
uv[3]=v;
*/
}
And the derivation:
// starting from bilinear interpolation
(1-v)*( (1-u)*x00 + u*x01 ) + v*( (1-u)*x10 + u*x11 ) - x
(1-v)*( (1-u)*y00 + u*y01 ) + v*( (1-u)*y10 + u*y11 ) - y
substitution 1:
dx0 = x01 - x00
dx1 = x11 - x10
dy0 = y01 - y00
dy1 = y11 - y10
we get:
(1-v) * ( x00 + u*dx0 ) + v * ( x10 + u*dx1 ) - x = 0
(1-v) * ( y00 + u*dy0 ) + v * ( y10 + u*dy1 ) - y = 0
we are trying to extract "v" out
x00 + u*dx0 + v*( x10 - x00 + u*( dx1 - dx0 ) ) - x = 0
y00 + u*dy0 + v*( y10 - y00 + u*( dy1 - dy0 ) ) - y = 0
substitution 2:
x00x = x00 - x
xd = x10 - x00
dxd = dx1 - dx0
y00y = y00 - y
yd = y10 - y00
dyd = dy1 - dy0
// much nicer
x00x + u*dx0 + v*( xd + u*dxd ) = 0
y00y + u*dy0 + v*( yd + u*dyd ) = 0
// these equations for "v" are used for the back-substitution
v = -( x00x + u*dx0 ) / ( xd + u*dxd )
v = -( y00y + u*dy0 ) / ( yd + u*dyd )
// but for now, we eliminate "v" to get one equation for "u"
( x00x + u*dx0 ) / ( xd + u*dxd ) = ( y00y + u*dy0 ) / ( yd + u*dyd )
put the denominators on the other side
( x00x + u*dx0 ) * ( yd + u*dyd ) = ( y00y + u*dy0 ) * ( xd + u*dxd )
x00x*yd + u*( dx0*yd + dyd*x00x ) + u^2* dx0*dyd = y00y*xd + u*( dy0*xd + dxd*y00y ) + u^2* dy0*dxd
// which is quadratic equation with these coefficients
c = x00x*yd - y00y*xd
b = dx0*yd + dyd*x00x - dy0*xd - dxd*y00y
a = dx0*dyd - dy0*dxd
Answer 6:
Some of the responses have slightly misinterpreted your question, i.e. they are assuming you are given the value of an unknown interpolated function, rather than an interpolated position p(x,y) inside the quad whose (s,t) coordinates you want to find. This is a simpler problem, and there is guaranteed to be a solution: the intersection of two straight lines through the quad.
One of the lines will cut through the segments p0p1 and p2p3, the other will cut through p0p2 and p1p3, similar to the axis-aligned case. These lines are uniquely defined by the position of p(x,y), and will obviously intersect at this point.
Considering just the line that cuts through p0p1 and p2p3, we can imagine a family of such lines, for each different s-value we choose, each cutting the quad at a different width. If we fix an s-value, we can find the two endpoints by setting t=0 and t=1.
So first assume the line has the form:
y = a0*x + b0
Then we know two endpoints of this line, if we fix a given s value. They are:
(1-s)p0 + (s)p1
(1-s)p2 + (s)p3
Given these two end points, we can determine the family of lines by plugging them into the equation for the line, and solving for a0 and b0 as functions of s. Setting an s-value gives the formula for a specific line. All we need now is to figure out which line in this family hits our point p(x,y). Simply plugging the coordinates of p(x,y) into our line formula, we can then solve for the target value of s.
The corresponding approach can be done to find t as well.
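To make this concrete (my sketch, not from the original answer): for a given s, the two endpoints above are e0(s) = (1-s)*p0 + s*p1 and e1(s) = (1-s)*p2 + s*p3, and p lies on the line through them exactly when the 2D cross product
( e1(s) - e0(s) ) X ( p - e0(s) ) = 0
Expanding this in s gives precisely the quadratic A*(1-s)^2 + B*2s(1-s) + C*s^2 = 0 from Answer 1, and t is then the fractional position of p between e0(s) and e1(s).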
Answer 7:
If all you have is a single value of p, such that p is between the minimum and maximum values at the four corners of the square, then no, it is not possible in general to find a SINGLE solution (s,t) such that the bilinear interpolant will give you that value.
In general, there will be an infinite number of solutions (s,t) inside the square. They will lie along a curved (hyperbolic) path through the square.
What if your function is vector-valued, so that you have two known values at some unknown point in the square? Given known values of two parameters at each corner of the square, a solution MAY exist, but there is no assurance of that. Remember that we can view this as two separate, independent problems. The solution to each of them will lie along a hyperbolic contour line through the square. If the pair of contours cross inside the square, then a solution exists. If they do not cross, then no solution exists.
You also ask if a simple formula exists to solve the problem. Sorry, but not really that I see. As I said, the curves are hyperbolic.
One solution is to switch to a different method of interpolation. So instead of bilinear, break the square into a pair of triangles. Within each triangle, we can now use truly linear interpolation. So now we can solve the linear system of 2 equations in 2 unknowns within each triangle. There may be one solution in each triangle, except for a rare degenerate case where the corresponding piecewise linear contour lines happen to be coincident.
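A minimal sketch of that triangle approach for the two-value (vector-valued) case (in C, not from the original answer; the function name and triangle split are my own choices): split the quad into triangles, say (p0, p1, p3) and (p0, p3, p2), and in each triangle solve the 2-equation linear system by computing barycentric coordinates. The resulting weights invert the truly linear interpolation exactly.
#include <math.h>
/* 2D cross product */
static double cross2d(double ax, double ay, double bx, double by)
{
    return ax*by - ay*bx;
}
/* Barycentric coordinates of p in triangle (a,b,c); this is the 2x2 linear
 * solve mentioned above. Returns 1 if p lies inside (or on) the triangle.
 * The weights satisfy p = wa*a + wb*b + wc*c with wa + wb + wc = 1, so any
 * value known at the corners interpolates linearly as wa*fa + wb*fb + wc*fc. */
int barycentric(const double a[2], const double b[2], const double c[2],
                const double p[2], double *wa, double *wb, double *wc)
{
    double area = cross2d(b[0]-a[0], b[1]-a[1], c[0]-a[0], c[1]-a[1]);
    if (fabs(area) < 1e-14)
        return 0;                            /* degenerate triangle */
    *wb = cross2d(p[0]-a[0], p[1]-a[1], c[0]-a[0], c[1]-a[1]) / area;
    *wc = cross2d(b[0]-a[0], b[1]-a[1], p[0]-a[0], p[1]-a[1]) / area;
    *wa = 1.0 - *wb - *wc;
    return (*wa >= 0.0 && *wb >= 0.0 && *wc >= 0.0);
}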
Answer 8:
Well, if p is a 2D point, yes, you can get it easily. In that case, s is the fractional component of the total width of the quadrilateral at t, and t is likewise the fractional component of the total height of the quadrilateral at s.
If, however, p is a scalar, it's not necessarily possible, because the bilinear interpolation function is not necessarily monotonic.