Convert a bounding box in ECEF coordinates to ENU

Published 2020-04-16 03:14

Question:

I have a geometry whose vertices are in Cartesian coordinates, specifically ECEF (Earth-centred, Earth-fixed) coordinates. The geometry actually lies on an ellipsoidal model of the Earth using WGS84 coordinates. The Cartesian coordinates were obtained by converting the set of latitudes and longitudes along which the geometries lie, but I no longer have access to those. What I have is an axis-aligned bounding box with xmax, ymax, zmax and xmin, ymin, zmin obtained by parsing the Cartesian coordinates. (There is obviously no Cartesian point of the geometry exactly at (xmax, ymax, zmax) or (xmin, ymin, zmin); the bounding box is just a cuboid enclosing the geometry.)

What I want to do is calculate the camera distance in an overview mode such that this geometry's bounding box perfectly fits the camera frustum.

I am not very clear on the approach to take here. A method like using a local-to-world matrix comes to mind, but it is not very clear to me.

@Specktre I referred to your suggestions on shifting points in 3D, and that led me to another, improved solution, though still not a perfect one.

  1. Compute a matrix that transforms from ECEF to ENU. See http://www.navipedia.net/index.php/Transformations_between_ECEF_and_ENU_coordinates
  2. Rotate all eight corners of my original bounding box using this matrix.
  3. Compute a new bounding box by finding the min and max of x, y, z of these rotated points.
  4. Compute the distance:
    • cameraDistance1 = ((newbb.ymax - newbb.ymin)/2)/tan(fov/2)
    • cameraDistance2 = ((newbb.xmax - newbb.xmin)/2)/(tan(fov/2) * aspectRatio)
    • cameraDistance = max(cameraDistance1, cameraDistance2)
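The four steps above can be sketched like this (a minimal sketch in Python; the function names are mine, and it assumes you can recover a reference latitude/longitude for the ENU frame, e.g. from the box midpoint via an ECEF-to-geodetic conversion):

```python
import math

def ecef_to_enu_matrix(lat_rad, lon_rad):
    """Rotation taking ECEF vectors into the local East/North/Up frame
    at the given geodetic latitude/longitude (Navipedia convention)."""
    sl, cl = math.sin(lon_rad), math.cos(lon_rad)
    sp, cp = math.sin(lat_rad), math.cos(lat_rad)
    return [
        [-sl,       cl,      0.0],  # East
        [-sp * cl, -sp * sl, cp ],  # North
        [ cp * cl,  cp * sl, sp ],  # Up
    ]

def rotate(m, p):
    # apply a 3x3 row-major matrix to a point
    return tuple(sum(m[r][c] * p[c] for c in range(3)) for r in range(3))

def fit_camera_distance(bb_min, bb_max, lat_rad, lon_rad, fov_y_rad, aspect):
    """Steps 1-4: rotate the eight ECEF corners into ENU, take the new
    axis-aligned extents, and pick the distance that fits both axes
    (fov is along y, so the x extent is divided by the aspect ratio)."""
    m = ecef_to_enu_matrix(lat_rad, lon_rad)
    corners = [(x, y, z) for x in (bb_min[0], bb_max[0])
                         for y in (bb_min[1], bb_max[1])
                         for z in (bb_min[2], bb_max[2])]
    rotated = [rotate(m, c) for c in corners]
    xs, ys, _zs = zip(*rotated)
    half_w = (max(xs) - min(xs)) / 2.0
    half_h = (max(ys) - min(ys)) / 2.0
    camera_distance1 = half_h / math.tan(fov_y_rad / 2.0)
    camera_distance2 = half_w / (math.tan(fov_y_rad / 2.0) * aspect)
    return max(camera_distance1, camera_distance2)
```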

This time I had to use the aspect ratio along x, as I had previously expected, since in my application the fov is along y. Although this works almost accurately, there is still a small bug, I guess. I am not very sure it is a good idea to generate a new bounding box. Maybe it is more accurate to identify two points, point1 = (xmax, ymin, zmax) and point2 = (xmax, ymax, zmax), in the original bounding box, find their values after multiplying by the matrix, and then take (point2 - point1).length(). Similarly for y. Would that be more accurate?
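For what it is worth, the two-point alternative can be checked with a quick sketch (hypothetical box extents and an arbitrary rotation; the helper names are mine). Since a pure rotation preserves distances, the rotated edge length comes out identical to the original edge length, so measuring rotated edges only recovers the original box extents rather than the box's footprint in the ENU axes:

```python
import math

def rotate(m, p):
    # apply a 3x3 row-major rotation matrix to a point
    return tuple(sum(m[r][c] * p[c] for c in range(3)) for r in range(3))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# hypothetical bounding-box extents and an arbitrary rotation about Z
xmin, ymin, zmin = -1.0, -2.0, -3.0
xmax, ymax, zmax = 1.0, 2.0, 3.0
c, s = math.cos(0.7), math.sin(0.7)
m = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

point1 = (xmax, ymin, zmax)
point2 = (xmax, ymax, zmax)
width = dist(rotate(m, point1), rotate(m, point2))
# rotation is rigid, so width equals the original edge length ymax - ymin
```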

Answer 1:

  1. transform matrix

    The first thing is to understand that a transform matrix represents a coordinate system. Look at Transform matrix anatomy for another example.

    In standard OpenGL notation, if you use the direct matrix then you are converting from the matrix's local coordinate space (LCS) to the world global coordinate space (GCS). If you use the inverse matrix then you are converting coordinates from GCS to LCS.

  2. camera matrix

    The camera matrix converts to camera space, so you need the inverse matrix. You get the camera matrix like this:

    camera=inverse(camera_space_matrix)
    

    Now, for info on how to construct your camera_space_matrix so it fits the bounding box, look here:

    • Frustum distance computation

    So compute the midpoint of the top rectangle of your box, then compute the camera distance as the maximum of the distances computed from all vertices of the box, so:

    camera position = midpoint + distance*midpoint_normal
    

    The orientation depends on your projection matrix. If you use gluPerspective then you are viewing along -Z or +Z according to the selected glDepthFunc. So set the Z axis of the matrix to the normal; the Y and X vectors can be aligned to North/South and East/West, so for example:

    Y=Z x (1,0,0)
    X = Z x Y
    

    Now put the position and the axis vectors X, Y, Z inside the matrix, compute the inverse matrix, and that is it.
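The recipe above can be sketched as follows (a minimal sketch in Python; the function names are mine, and the shortcut inverse assumes the matrix holds only rotation and translation, so R^T and -R^T·t suffice). The normal is the normalized midpoint because ECEF is Earth-centred:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def camera_space_matrix(midpoint, distance):
    """Camera space as described above: Z = surface normal (the normalized
    midpoint), position = midpoint + distance * normal, Y = Z x (1,0,0),
    X = Z x Y. Degenerate when Z is parallel to (1,0,0)."""
    z = normalize(midpoint)  # outward normal: midpoint - (0,0,0), normalized
    position = tuple(m + distance * c for m, c in zip(midpoint, z))
    y = normalize(cross(z, (1.0, 0.0, 0.0)))
    x = cross(z, y)
    # row-major 4x4 with the axis vectors in the columns, position last
    return [[x[i], y[i], z[i], position[i]] for i in range(3)] \
           + [[0.0, 0.0, 0.0, 1.0]]

def rigid_inverse(m):
    """Inverse of a rotation+translation matrix: R^T and -R^T * t."""
    rt = [[m[c][r] for c in range(3)] for r in range(3)]
    t = [m[r][3] for r in range(3)]
    return [rt[r] + [-sum(rt[r][c] * t[c] for c in range(3))]
            for r in range(3)] + [[0.0, 0.0, 0.0, 1.0]]
```

The view matrix is then `rigid_inverse(camera_space_matrix(midpoint, distance))`, matching `camera = inverse(camera_space_matrix)` above.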

[Notes]

Do not forget that the FOV can have different angles for the X and Y axes (aspect ratio).

The normal is just midpoint - Earth centre, which is (0,0,0), so the normal is also the midpoint. Just normalize it to length 1.0.

For all computations use the Cartesian world GCS (global coordinate system).