I have a large world, about 5,000,000 x 1,000,000 units. The camera can be near some object or far enough away to see the whole world.
I get the mouse position in world coordinates by unprojecting (Z comes from depth buffer).
The problem is that this involves a matrix inverse. When big and small numbers are mixed (e.g. translating far from the origin while scaling to see more of the world), the calculations become unstable.
To gauge the accuracy of this inverse matrix I look at the determinant. Ideally it will never be zero, due to the nature of transformation matrices. I know a small `det` means nothing on its own; it can simply come from small values in the matrix. But it can also be a sign of the numbers going wrong.
I also know I can compute the inverse by inverting each transformation and multiplying them. Does that provide more accuracy?
How can I tell if my matrix is degenerating or suffering numerical issues?
For starters see Understanding 4x4 homogenous transform matrices.
Improving accuracy for cumulative matrices (Normalization)
To avoid degeneration of a transform matrix, select one axis as the main one. I usually choose `Z`, as it is usually the view or forward direction in my apps. Then exploit the cross product to recompute/normalize the rest of the axes (which should be perpendicular to each other and, unless scaling is used, also unit size). This can be done only for orthogonal/orthonormal matrices, so no skew or projections... You do not need to do this after every operation; just keep a counter of operations done on each matrix, and once some threshold is crossed, normalize it and reset the counter.
To detect degeneration of such matrices, you can test for orthogonality via the dot product between any two axes (it should be zero or very near it). For orthonormal matrices you can also test for unit length of the axis direction vectors...
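Such a degeneration test might look like this. This is only a sketch: it assumes an OpenGL-style `float[16]` layout with the `X`, `Y`, `Z` basis vectors at `m[0..2]`, `m[4..6]`, `m[8..10]`, and the function name and tolerance are my own choices, not from the answer:

```cpp
#include <cmath>

// Degeneration test sketch for orthonormal transform matrices.
// Assumed layout: OpenGL-style float[16], basis vectors at
// m[0..2], m[4..6], m[8..10] (an assumption, not the author's code).
bool is_degenerate(const float *m, float eps = 1e-4f)
{
    const float *X = m + 0, *Y = m + 4, *Z = m + 8;
    auto dot = [](const float *a, const float *b)
        { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; };
    // axes must stay (nearly) perpendicular to each other ...
    if (std::fabs(dot(X, Y)) > eps) return true;
    if (std::fabs(dot(Y, Z)) > eps) return true;
    if (std::fabs(dot(Z, X)) > eps) return true;
    // ... and, for orthonormal matrices, (nearly) unit length
    if (std::fabs(dot(X, X) - 1.0f) > eps) return true;
    if (std::fabs(dot(Y, Y) - 1.0f) > eps) return true;
    if (std::fabs(dot(Z, Z) - 1.0f) > eps) return true;
    return false;
}
```

The identity matrix passes, while a matrix whose `X` axis has drifted toward `Y` is flagged.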
My transform matrix normalization (for orthonormal matrices) in C++ boils down to renormalizing the main axis and rebuilding the other two with cross products; the vector operations involved are just length, normalize, and cross product.
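A minimal sketch of both pieces, under the same assumed OpenGL-style `float[16]` layout (basis vectors at `m[0..2]`, `m[4..6]`, `m[8..10]`); the helper names are my illustration, not necessarily the internals of the author's `reper` class:

```cpp
#include <cmath>

// --- 3D vector operations ---
float vector_len(const float *a)
{
    return std::sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
}
void vector_one(float *c, const float *a)                 // c = a / |a|
{
    float l = vector_len(a);
    if (l > 0.0f) { c[0] = a[0]/l; c[1] = a[1]/l; c[2] = a[2]/l; }
}
void vector_mul(float *c, const float *a, const float *b) // c = a x b
{
    float x = a[1]*b[2] - a[2]*b[1];
    float y = a[2]*b[0] - a[0]*b[2];
    float z = a[0]*b[1] - a[1]*b[0];
    c[0] = x; c[1] = y; c[2] = z;                         // safe if c == a or b
}

// --- normalization: keep Z as the main axis, rebuild X and Y ---
void matrix_orthonormalize(float *m)
{
    float *X = m + 0, *Y = m + 4, *Z = m + 8;
    vector_one(Z, Z);          // main axis: just renormalize
    vector_mul(X, Y, Z);       // X = Y x Z -> perpendicular to Y and Z
    vector_one(X, X);
    vector_mul(Y, Z, X);       // Y = Z x X -> perpendicular to Z and X
    vector_one(Y, Y);
}
```

After the call the three axes are mutually perpendicular and unit length again, with `Z` preserved in direction.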
Improving accuracy for non cumulative matrices
Your only choice is to use at least `double` precision for your matrices. Safest is to use GLM or your own matrix math based on at least the `double` data type (like my `reper` class). A cheap alternative is using the `double`-precision GL functions (the `d`-suffixed ones such as `glTranslated`), which in some cases helps but is not safe, as the OpenGL implementation can truncate them to `float`. Also there are no 64-bit HW interpolators yet, so all interpolated results between pipeline stages are truncated to `float`s.

Sometimes a relative reference frame helps (it keeps operations on similar-magnitude values).

Also, in case you are using your own matrix math functions, you have to consider the order of operations so you always lose the smallest amount of accuracy possible.
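To see why this matters at the question's world scale, consider this illustration (my own, not code from the answer): a `float` mantissa has ~24 bits, so near a coordinate of 5,000,000 its resolution is only about 0.5 units, and sub-unit detail vanishes unless you subtract the camera position in `double` first:

```cpp
// Camera-relative coordinate computed in float vs. double.
// Near 5,000,000 a float can only resolve steps of 0.5, so a point
// 0.25 units from the camera collapses onto the camera in float.
float  local_float (double world, double cam) { return (float)world - (float)cam; }
double local_double(double world, double cam) { return world - cam; }
```

For example, `local_double(5000000.25, 5000000.0)` yields `0.25`, while `local_float(5000000.25, 5000000.0)` yields `0.0f` because `5000000.25` is not representable as a `float`.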
Pseudo inverse matrix
In some cases you can avoid computing the inverse matrix via determinants, the Horner scheme, or Gauss elimination, because you can exploit the fact that the transpose of an orthogonal rotation matrix is also its inverse. Here is how it is done:
So the rotational part of the matrix is transposed, the projection part stays as it was, and the origin position is recomputed so that

A*inverse(A)=unit_matrix

This function is written so it can be used in place, so calling it with the same matrix as both source and destination leads to valid results too. This way of computing the inverse is quicker and numerically safer, as it performs far fewer operations (no recursion or reductions, no divisions). Of course this works only for orthogonal homogeneous 4x4 matrices!
Detection of wrong inverse
So if you got matrix `A` and its inverse `B`, then `C = A*B` should be the unit matrix. So multiply both matrices and check: the non-diagonal elements of `C` should be close to `0.0` and the diagonal elements of `C` should be close to `+1.0`.
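A sketch of that check (same assumed OpenGL-style `float[16]` ordering; the function name and the idea of returning the worst deviation are mine):

```cpp
#include <cmath>

// Returns the worst deviation of C = A*B from the unit matrix.
// Near 0.0 -> B really behaves as the inverse of A.
// Assumed layout: OpenGL-style float[16] (my assumption).
float inverse_error(const float *A, const float *B)
{
    float worst = 0.0f;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
        {
            float c = 0.0f;                         // element of C = A*B
            for (int k = 0; k < 4; k++)
                c += A[k*4 + j] * B[i*4 + k];
            float target = (i == j) ? 1.0f : 0.0f;  // unit matrix element
            float e = std::fabs(c - target);
            if (e > worst) worst = e;
        }
    return worst;
}
```

Comparing the returned value against a tolerance (e.g. `1e-4`) tells you whether the inverse is still trustworthy.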
After some experiments I see that (speaking of transformation matrices, not arbitrary ones) the diagonal of the matrix `m` (i.e. the scaling factors, before inverting) is mainly responsible for the determinant value.

So I compare the product `p = m[0] · m[5] · m[10] · m[15]` (if all of them are != 0) with the determinant. If they are similar, `0.1 < p/det < 10`, I can somewhat "trust" the inverse matrix. Otherwise I have numerical issues that advise a change of rendering strategy.
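The heuristic above can be sketched like this (my own rendering of it; it assumes an OpenGL-style `float[16]` affine matrix with projection row `0,0,0,1`, so `det` reduces to the 3x3 determinant times `m[15]` — those layout details are assumptions, not from the answer):

```cpp
// Heuristic from above: compare the diagonal product
// p = m[0]*m[5]*m[10]*m[15] against the determinant.
// Returns false when the heuristic does not apply (a zero on the
// diagonal) or when p and det differ by more than a factor of 10.
bool inverse_trustworthy(const float *m)
{
    float p = m[0] * m[5] * m[10] * m[15];
    if (p == 0.0f) return false;   // heuristic needs all diagonal entries != 0
    // determinant of the rotational/scaling 3x3 part (affine matrix assumed)
    float det3 = m[0]*(m[5]*m[10] - m[9]*m[6])
               - m[4]*(m[1]*m[10] - m[9]*m[2])
               + m[8]*(m[1]*m[6]  - m[5]*m[2]);
    float det = det3 * m[15];
    if (det == 0.0f) return false; // singular: no inverse to trust
    float r = p / det;
    return (r > 0.1f) && (r < 10.0f);   // "similar" per the answer
}
```

For a pure scale matrix `p` equals `det` exactly, so the ratio is `1`; note the heuristic simply gives up (returns `false`) on matrices with a zero on the diagonal, such as a 90-degree rotation.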