Convert Lat/Longs to X/Y Co-ordinates

Posted 2019-01-08 23:57

Question:

I have the lat/long value of a small area in Melbourne, -37.803134, 145.132377, and also a flat image of that area that I exported from OpenStreetMap (Osmarender image). Image width: 1018, height: 916.

I would like to be able to convert, using C++, the Lat/Long to an X,Y coordinate where the point would reflect the location.

I used various formulas that I found on the web, like the one given below, but nothing helped.

var y = ((-1 * lat) + 90) * (MAP_HEIGHT / 180);
var x = (lon + 180) * (MAP_WIDTH / 360);

It would be of great help if anyone could give me a clear explanation of how to do this. Any code would be much appreciated.

Answer 1:

You need more information than just a single lat/lon pair to be able to do this.

At this stage, the information you have provided is missing two things:

  • how large an area does your image cover (in terms of lat/lon)? Based on what you've provided, I don't know if the image shows an area a metre wide or a kilometre wide.
  • what spot on your image does your reference coordinate (-37.803134, 145.132377) refer to? Is it one of the corners? In the middle somewhere?

I'm also going to assume that your image is aligned north/south - for example, it doesn't have north pointing towards the top left corner. That would tend to complicate things.

The easiest approach is to figure out exactly what lat/lon coordinates correspond to the (0, 0) pixel and the (1017, 915) pixel. Then you can figure out the pixel corresponding to a given lat/lon coordinate through interpolation.

To briefly outline that process, imagine that your (-37.803134, 145.132377) lat/lon corresponds to your (0, 0) pixel, and that you've discovered that your (1017, 915) pixel corresponds to the lat/lon (-37.798917, 145.138535). Assuming the usual convention with pixel (0, 0) in the bottom-left corner, this means north is up in the image.

Then, if you are interested in the target coordinate (-37.801465, 145.134984), you can figure out the corresponding number of pixels up the image it is as follows:

pixelY = ((targetLat - minLat) / (maxLat - minLat)) * (maxYPixel - minYPixel)
       = ((-37.801465 - -37.803134) / (-37.798917 - -37.803134)) * (915 - 0)
       = 362.138

That is, the corresponding pixel is 362 pixels from the bottom of the image. You can then do exactly the same for the horizontal pixel placement, but using longitudes and X pixels instead.

The part ((targetLat - minLat) / (maxLat - minLat)) works out how far you are between the two reference coordinates, and gives 0 to indicate that you are at the first one, 1 to indicate the second one, and numbers in between to indicate locations in between. So for example, it would produce 0.25 to indicate that you are 25% of the way north between the two reference coordinates. The last bit converts that into the equivalent pixels.
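
In C++ that interpolation might look something like the sketch below. The function and variable names are my own, and it assumes a north-aligned image with pixel (0, 0) in the bottom-left corner, as in the example above.

#include <cmath>

// Sketch of the interpolation described above: maps a lat/lon to a pixel,
// given the lat/lon of the (0, 0) pixel and of the (maxXPixel, maxYPixel) pixel.
// All names are illustrative, not from any existing library.
void latLonToPixel(double targetLat, double targetLon,
                   double minLat, double minLon,   // lat/lon at pixel (0, 0)
                   double maxLat, double maxLon,   // lat/lon at pixel (maxXPixel, maxYPixel)
                   int maxXPixel, int maxYPixel,
                   int& pixelX, int& pixelY)
{
    // How far the target lies between the two reference coordinates (0 to 1).
    double fracY = (targetLat - minLat) / (maxLat - minLat);
    double fracX = (targetLon - minLon) / (maxLon - minLon);

    // Scale the fractions to pixel positions and round to the nearest pixel.
    pixelY = static_cast<int>(std::lround(fracY * maxYPixel));
    pixelX = static_cast<int>(std::lround(fracX * maxXPixel));
}

Calling latLonToPixel(-37.801465, 145.134984, -37.803134, 145.132377, -37.798917, 145.138535, 1017, 915, pixelX, pixelY) gives pixelY = 362, matching the worked calculation.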

HTH!

EDIT Ok, based on your comment I can be a little more specific. Given that you seem to be using the top left corner as your primary reference point, I'll use the following definitions:

minLat = -37.803134
maxLat = -37.806232
MAP_HEIGHT = 916

Then, if we use an example coordinate of (-37.804465, 145.134984), the Y coordinate of the corresponding pixel, relative to the top-left corner, is:

pixelY = ((targetLat - minLat) / (maxLat - minLat)) * (MAP_HEIGHT - 1)
       = ((-37.804465 - -37.803134) / (-37.806232 - -37.803134)) * 915
       = 393.11

Therefore, the corresponding pixel is 393 pixels down from the top. I'll let you work out the horizontal equivalent for yourself - it's basically the same. NOTE The -1 with the MAP_HEIGHT is because if you start at zero, the maximum pixel number is 915, not 916.
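
Expressed in code, using the definitions above:

// Pixel row measured down from the top-left pixel (0, 0); here minLat is the
// latitude of the top edge and maxLat the latitude of the bottom edge.
double pixelY = ((targetLat - minLat) / (maxLat - minLat)) * (MAP_HEIGHT - 1);
// = ((-37.804465 - -37.803134) / (-37.806232 - -37.803134)) * 915 ≈ 393.11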

EDIT: One thing I'd like to take the opportunity to point out is that this is an approximation. In reality, there isn't a simple linear relationship between latitude and longitude coordinates and other forms of Cartesian coordinates for a number of reasons including the projections that are used while making maps, and the fact that the Earth is not a perfect sphere. In small areas this approximation is close enough as to make no significant difference, but on larger scales discrepancies may become evident. Depending on your needs, YMMV. (My thanks to uray, whose answer below reminded me that this is the case).



Answer 2:

If you're looking for an accurate conversion of geodetic coordinates (lat, lon) into your own defined Cartesian coordinates (x, y metres from a reference point), you can use my code snippet here. This function accepts geodetic coordinates in radians and outputs the result as x, y.

parameters:

  • refLat, refLon: the geodetic coordinate you define as (0, 0) in the Cartesian frame (in radians)
  • lat, lon: the geodetic coordinate whose Cartesian coordinate you want to calculate (in radians)
  • xOffset, yOffset: the resulting Cartesian coordinate x, y (in metres)

code:

#include <cmath>   // sin, cos, tan, atan, atan2, sqrt, fabs

// WGS-84 ellipsoid parameters
#define GD_semiMajorAxis 6378137.000000   // semi-major axis a, in metres
#define GD_TranMercB     6356752.314245   // semi-minor axis b, in metres
#define GD_geocentF      0.003352810664   // flattening f

// Computes the local Cartesian offset of (lat, lon) relative to (refLat, refLon)
// using Vincenty's inverse formula on the ellipsoid: xOffset is the east offset
// and yOffset the north offset, both in metres.
void geodeticOffsetInv( double refLat, double refLon,
                        double lat,    double lon, 
                        double& xOffset, double& yOffset )
{
    double a = GD_semiMajorAxis;
    double b = GD_TranMercB;
    double f = GD_geocentF;

    double L     = lon - refLon;
    double U1    = atan((1-f) * tan(refLat));
    double U2    = atan((1-f) * tan(lat));
    double sinU1 = sin(U1); 
    double cosU1 = cos(U1);
    double sinU2 = sin(U2);
    double cosU2 = cos(U2);

    double lambda = L;
    double lambdaP;
    double sinSigma;
    double sigma;
    double cosSigma;
    double cosSqAlpha;
    double cos2SigmaM;
    double sinLambda;
    double cosLambda;
    double sinAlpha;
    int iterLimit = 100;
    do {
        sinLambda = sin(lambda);
        cosLambda = cos(lambda);
        sinSigma = sqrt((cosU2*sinLambda) * (cosU2*sinLambda) + 
                        (cosU1*sinU2-sinU1*cosU2*cosLambda) * 
                        (cosU1*sinU2-sinU1*cosU2*cosLambda) );
        if (sinSigma==0)
        {
            xOffset = 0.0;
            yOffset = 0.0;
            return ;  // co-incident points
        }
        cosSigma    = sinU1*sinU2 + cosU1*cosU2*cosLambda;
        sigma       = atan2(sinSigma, cosSigma);
        sinAlpha    = cosU1 * cosU2 * sinLambda / sinSigma;
        cosSqAlpha  = 1 - sinAlpha*sinAlpha;
        cos2SigmaM  = cosSigma - 2*sinU1*sinU2/cosSqAlpha;
        if (cos2SigmaM != cos2SigmaM) //isNaN
        {
            cos2SigmaM = 0;  // equatorial line: cosSqAlpha=0 (§6)
        }
        double C = f/16*cosSqAlpha*(4+f*(4-3*cosSqAlpha));
        lambdaP = lambda;
        lambda = L + (1-C) * f * sinAlpha *
            (sigma + C*sinSigma*(cos2SigmaM+C*cosSigma*(-1+2*cos2SigmaM*cos2SigmaM)));
    } while (fabs(lambda-lambdaP) > 1e-12 && --iterLimit>0);

    if (iterLimit==0)
    {
        xOffset = 0.0;
        yOffset = 0.0;
        return;  // formula failed to converge
    }

    double uSq  = cosSqAlpha * (a*a - b*b) / (b*b);
    double A    = 1 + uSq/16384*(4096+uSq*(-768+uSq*(320-175*uSq)));
    double B    = uSq/1024 * (256+uSq*(-128+uSq*(74-47*uSq)));
    double deltaSigma = B*sinSigma*(cos2SigmaM+B/4*(cosSigma*(-1+2*cos2SigmaM*cos2SigmaM)-
        B/6*cos2SigmaM*(-3+4*sinSigma*sinSigma)*(-3+4*cos2SigmaM*cos2SigmaM)));
    double s = b*A*(sigma-deltaSigma);

    // Initial bearing from the reference point to the target, then split the
    // ellipsoidal distance s into east (x) and north (y) components.
    double bearing = atan2(cosU2*sinLambda,  cosU1*sinU2-sinU1*cosU2*cosLambda);
    xOffset = sin(bearing)*s;
    yOffset = cos(bearing)*s;
}
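
For illustration, a hypothetical call might look like the snippet below; the degree-to-radian conversion and the example coordinates (taken from the question and the first answer) are my additions, not part of the original function.

#include <cstdio>

int main()
{
    const double kDegToRad = 3.14159265358979323846 / 180.0;

    // Reference point (treated as the Cartesian origin) and target point, in radians.
    double refLat = -37.803134 * kDegToRad, refLon = 145.132377 * kDegToRad;
    double lat    = -37.798917 * kDegToRad, lon    = 145.138535 * kDegToRad;

    double x = 0.0, y = 0.0;
    geodeticOffsetInv(refLat, refLon, lat, lon, x, y);

    // x is the east offset and y the north offset from the reference, in metres.
    std::printf("offset: x = %.1f m, y = %.1f m\n", x, y);
    return 0;
}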


Answer 3:

I wouldn't worry too much about earth curvature. I haven't used openstreetmap before, but I've just had a quick look and it appears that they use a Mercator projection.

This simply means that they've flattened the planet onto a rectangle for you, making X proportional to longitude and Y almost exactly proportional to latitude.

So you can go ahead and use Mac's simple formulas and you will be very accurate. Your latitude will be out by much less than a pixel's worth for the small map you are dealing with. Even on a map the size of Victoria you would only get an error of 2-3%.

diverscuba23 pointed out that you have to choose an ellipsoid... openstreetmap uses WGS84, and so does most modern mapping. However, beware that many maps in Australia use the older AGD66 which can differ by 100-200 metres or so.



Answer 4:

#include <cmath>   // fmod, log, tan

// Projects a longitude onto the X axis of a Mercator map that spans the whole
// world (-180 maps to x = 0, +180 to x = width).
double testClass::getX(double lon, int width)
{
    // width is the map width in pixels
    double x = fmod((width*(180+lon)/360), (width + (width/2.0)));

    return x;
}

// Projects a latitude onto the Y axis of the same map; the equator maps to
// y = height/2 and north is towards smaller y values.
double testClass::getY(double lat, int height, int width)
{
    // height and width are the map height and width in pixels
    double PI = 3.14159265359;
    double latRad = lat*PI/180;

    // Mercator projection of the latitude (use floating-point division to
    // avoid integer truncation of height/2)
    double mercN = log(tan((PI/4)+(latRad/2)));
    double y     = (height/2.0)-(width*mercN/(2*PI));
    return y;
}
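
Note that these formulas place the point on a world-spanning Mercator map scaled so that 360 degrees of longitude covers width pixels and the equator sits at y = height/2, not within the small Melbourne extract from the question. A hypothetical usage might look like the following; the testClass declaration is assumed, since the original answer only shows the two method bodies.

#include <cstdio>

// Assumed declaration, for illustration only.
class testClass {
public:
    double getX(double lon, int width);
    double getY(double lat, int height, int width);
};

int main()
{
    testClass t;
    // Coordinates from the question on a hypothetical 1018 x 916 world map.
    double x = t.getX(145.132377, 1018);
    double y = t.getY(-37.803134, 916, 1018);
    std::printf("x = %.1f, y = %.1f\n", x, y);
    return 0;
}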