Horizontal coordinate system for spherical terrain

Does anyone have any information about a horizontal coordinate system for reference frames like walking, driving, etc. on spherical terrain (converting local coordinates to topocentric horizon coordinates)? Does anyone know of any resources on using quaternions with a horizon coordinate system?

I googled a bit, but the sources I found all use a Z-up convention. I need to swap the Z and Y axes for OpenGL coordinates, which requires an entirely different rotation matrix. I have been unable to find any source covering OpenGL coordinates so far.

You have to be more specific, like what kind of coordinates are used to describe your spherical terrain (there are quite a few possible options), and what do you mean by “local coordinates”?

Just to illustrate the complexity of your question, there are a few kinds of commonly used coordinate systems (Flat-Earthers can stop reading here :wink: ):

  1. Geographic coordinates: Describes the position of elements on a spherical or ellipsoidal body in two angular unit coordinates and optionally a linear unit coordinate for the distance to the body’s surface (AKA height above ground).
  2. Geocentric coordinates: Describes the position of elements on a spherical or ellipsoidal body in three linear unit coordinates. The center of the spherical or ellipsoidal body is the origin (0,0,0) of the coordinate system. The three axes are orthogonal to each other: usually the Z axis points from the center out of the body through the north pole, the X axis points from the center out of the body through the intersection of the equator and the prime meridian, and the Y axis points from the center out of the body through the intersection of the equator and the 90°E meridian.
  3. Projected coordinates: Imagine a piece of paper draped onto the surface of a section of the spherical or ellipsoidal body. Now imagine you could turn on a light at some location inside the body (usually the center) and see the shadows of all the land masses (or whatever elements you like) on that paper. Since the paper doesn’t perfectly bend to the surface in all directions and touches the body in only a few locations, you get distortion where angles, areas, or distances seem wrong. (Look at the Google Maps map, where land masses near the poles are way too big compared to the land masses near the equator.) The coordinates can be described in any kind of unit that fits the purpose best (linear units like meters, pixels, etc., or angular units).
  4. Topocentric coordinates: Now imagine a stiff (i.e., cannot be deformed) rectangular box put onto the surface of a spherical or ellipsoidal body, touching the body in only one point (often called the anchor). That point is usually used as the origin (0,0,0) to which all coordinates in that coordinate system are relative. The box is usually oriented so that its edges (or axes) align with the east and north directions on the surface of the body (leaving the last axis pointing up from the surface). Usually the X axis aligns with east, the Y axis with north, and the Z axis points up.
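To make the geographic → geocentric relationship from item 2 concrete, here is a minimal sketch for a purely spherical body (the function name and example radius are my own; a real ellipsoidal body needs the full EPSG formulas instead):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Geographic (lat/lng in radians, h = height above surface) -> geocentric
// XYZ on a spherical body of radius R, using the axis convention above:
// X through equator/prime meridian, Y through equator/90E meridian,
// Z through the north pole.
std::array<double, 3> geographicToGeocentric(double lat, double lng,
                                             double h, double R)
{
    double r = R + h; // distance from the body's center
    return { r * std::cos(lat) * std::cos(lng),
             r * std::cos(lat) * std::sin(lng),
             r * std::sin(lat) };
}
```

At lat = 90° this collapses to (0, 0, R + h), i.e. the north pole sits on the Z axis, matching the description above.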

A coordinate system where all coordinates use the same linear unit and where all axes are orthogonal to each other is called a Cartesian coordinate system. Geocentric and topocentric coordinate systems are Cartesian coordinate systems; geographic and projected coordinate systems are not. As far as I know, quaternions only work in Cartesian coordinate systems (or in projected coordinate systems that are assumed to be Cartesian by ignoring the distortions).

The anchor of a topocentric coordinate system can be used to create an affine transformation (rotation, scale, and translation combined into one matrix), which can be used to convert between both systems. In such a transformation matrix, swapping axes and flipping directions can easily be done at the same time.
Due to the translation part a topocentric coordinate system is considered a local coordinate system (as long as your coordinates are close to the origin).
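A minimal sketch of that affine transformation for a purely spherical body (function and variable names are illustrative; the full ellipsoidal version is EPSG:9836):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Geocentric -> topocentric (ENU) for an anchor at (lat0, lng0) on a
// sphere of radius R: translate so the anchor becomes the origin, then
// rotate so that x = east, y = north, z = up.
Vec3 geocentricToTopocentric(const Vec3& p, double lat0, double lng0, double R)
{
    double sphi = std::sin(lat0), cphi = std::cos(lat0);
    double slam = std::sin(lng0), clam = std::cos(lng0);

    // Anchor position in geocentric coordinates (the translation part).
    Vec3 a = { R * cphi * clam, R * cphi * slam, R * sphi };
    Vec3 d = { p[0] - a[0], p[1] - a[1], p[2] - a[2] };

    // Rows of the rotation: the east, north, and up unit vectors at the anchor.
    return { -slam * d[0] + clam * d[1],
             -sphi * clam * d[0] - sphi * slam * d[1] + cphi * d[2],
              cphi * clam * d[0] + cphi * slam * d[1] + sphi * d[2] };
}
```

For example, a point directly above the anchor comes out as (0, 0, height), i.e. purely "up" in the local frame.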

If I were to hazard a guess, your spherical terrain is defined in terms of projected coordinates, for which there is hopefully a transformation defined to get geographic or geocentric coordinates.


Ok, I mean geocentric coordinates: from latitude and longitude to a vector using the right-handed rule (Z points toward me). I need a horizon reference frame (rotation matrix) so that the up vector points toward the sky from the spherical terrain, with the camera direction pointing at the horizon toward the north pole by default (heading 0 degrees). Then I can use controls to rotate the camera around, move it to another location, etc.

    inline glm::dvec3 convertEquatorialToLocal(double lat, double lng, double rad)
    {
        double slat = sin(lat), clat = cos(lat);
        double slng = sin(lng), clng = cos(lng);
        double xz = rad * clat;

        return { xz*clng, rad*slat, xz*-slng };
    }

Your transformation (convertEquatorialToLocal) is limited to spheres and doesn’t support height above ground, which is fine if you don’t need that. But it applies an additional 90-degree rotation compared to what is commonly used in the field of geo-informatics (Y and Z are swapped and the sign of your Z component is inverted; see Geographic/geocentric conversions - EPSG:9602 for details on the full transformation), so watch out for that.
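If you ever need to compare against data in the standard Z-up convention, the difference is just a fixed axis permutation. A sketch (the function name is mine):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Map the Y-up convention produced by convertEquatorialToLocal onto the
// standard Z-up geocentric convention: Y and Z swap places, and the sign
// of the original z component flips.  The inverse is the same idea in
// reverse: { v[0], v[2], -v[1] }.
Vec3 localToStandardGeocentric(const Vec3& v)
{
    return { v[0], -v[2], v[1] };
}
```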

To continue your transformation towards your horizon reference frame, you want to convert your geocentric coordinates to topocentric coordinates, see Geocentric/topocentric conversions - EPSG:9836, with the location of the viewer being the origin of the topocentric CS. As said before, that coordinate system has the X coordinate pointing east, Y coordinate pointing north and Z pointing up.
To establish a model/view transformation with Z pointing towards the viewer, you just need to rotate 90 degrees around the X axis to have the viewer look towards north (X will point right/east, Y points up the screen/away from the ground, and Z towards the viewer/south). And then you apply (by multiplying with) your heading rotation.
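That 90-degree step can be written as a fixed permutation too. A sketch (the sign convention assumes a proper rotation that keeps the frame right-handed):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;

// Rotate an ENU vector (x = east, y = north, z = up) by 90 degrees about
// the X axis into OpenGL eye-space conventions: east stays on x, the old
// up becomes +y (up the screen), and the old north becomes -z, so a
// camera looking down -z faces north.
Vec3 enuToGlView(const Vec3& v)
{
    return { v[0], v[2], -v[1] };
}
```

Checking the basis vectors is an easy sanity test: north (0,1,0) maps to (0,0,-1), i.e. straight into the screen, and up (0,0,1) maps to (0,1,0).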

Ok, I tried a rotation matrix for the horizon frame, but it did not work. I got the horizon frame matrix from my orbital mechanics book.

// On 15 runway at Cape Kennedy
go.lat = glm::radians(28.632307);
go.lng = glm::radians(-80.705774);
double clat = cos(go.lat), slat = sin(go.lat);
double clng = cos(go.lng), slng = sin(go.lng);
go.R = {-slng,       clng,      0,
        -slat*clng, -slat*slng, clat,
         clat*clng,  clat*slng, slat };

rrot = glm::dmat3(1); // local coordinate (0 heading)
grot = go.R * rrot;   // to local planet coordinate

It did not point at the horizon level (north, 0 heading). It points at that runway upside down (45 degrees off from a right angle) instead. Is that the horizon reference frame, right? I need to swap the Y and Z axes to point at the horizon level (north), etc., because the book shows Z always pointing up. I tried a 90-degree X rotation, but it still did not point at the horizon level.

I’m pretty familiar with the coordinate transformation math, but sadly not with glm, so I’m not sure whether you’re making an error in setting the go.R matrix like that. Since you’re using a flat initializer list, make sure the order of elements matches the expectation of the glm matrix class (row-major vs. column-major).
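One way to catch that kind of mixup without guessing: glm’s brace-initializer consumes values column by column, so a matrix copied row-by-row from a textbook ends up transposed, and for a pure rotation the transpose is the inverse. A standalone sketch (plain arrays instead of glm, arbitrary test values) that verifies the row interpretation by checking that the matrix sends the local up direction to (0, 0, 1):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>; // m[row][col], i.e. row-major

Vec3 mul(const Mat3& m, const Vec3& v)
{
    return { m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
             m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
             m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2] };
}

// The horizon-frame matrix with rows (east, north, up), as printed in
// most textbooks.  If the same nine numbers are fed to a column-major
// constructor, the result is the transpose -- the inverse rotation.
Mat3 horizonFrame(double lat, double lng)
{
    double clat = std::cos(lat), slat = std::sin(lat);
    double clng = std::cos(lng), slng = std::sin(lng);
    return {{ { -slng,       clng,       0.0  },
              { -slat*clng, -slat*slng,  clat },
              {  clat*clng,  clat*slng,  slat } }};
}
```

If `mul(horizonFrame(lat, lng), up)` is not (0, 0, 1) for the geocentric up direction at (lat, lng), the element order is transposed somewhere.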

To find potential issues in your approach, you’ll need to be more forthcoming. Describe the input data (maybe give a few example coordinates, so we can verify the order of axes and such), the whole transformation setup, and maybe take some screenshots of the results.

If I were to implement something like this, I would first make sure the model data is transformed correctly, without applying any additional view/camera transformation. Since the camera starts out looking in the negative Z direction without any view transformation, and the data is expected to be in ENU (x=east; y=north; z=up) after the commonly used topocentric transformation, I would expect to get a result looking down on the data. (You might have to move the camera a bit along the Z axis if the data is not in front of it.)

My view matrix uses column-major order. If I use row-major, the quaternions produce confusing, opposite effects.
My book did not say whether it uses column-major or row-major order, only that it uses the right-handed rule.

double alt = cbody->getRadius() + go.alt;
glm::dvec3 pos = cbody->getEquatorialToLocal(28.632307, -80.705774, alt);
go.R = glm::lookAt(pos, { 0, 0, 0 }, { 0, 1, 0 } );

It correctly points down at runway 15. I tried rotating the camera up with a rotation matrix, but it did not align with the horizon level; it aligned with the global XYZ axes instead. It only works with quaternion rotation, so I will use quaternions instead of a rotation matrix for the camera direction.

I switched to quaternion rotation for the topocentric horizon frame, but the result is still the same. Here is the initialization routine (the setGroundMode() call):

    // Clear all ground velocity controls
    go.av = { 0, 0, 0 };
    go.tv = { 0, 0, 0 };

    double rad = cbody->getRadius() + go.alt;
    cam.rpos = cbody->convertEquatorialToLocal(go.lat, go.lng, rad);
    go.R = glm::lookAt(cam.rpos, {0, 0, 0}, {0, 1, 0});
    go.Q = go.R;

Here is the update call, run each frame:

    // rotate camera in local coordinate
    glm::dvec3 wv = go.av * 0.5;
    glm::dquat dr = glm::dquat(1.0, wv.x, wv.y, wv.z) * cam.rqrot;
    cam.rqrot = glm::normalize(cam.rqrot + dr);
    cam.rrot = glm::mat3_cast(cam.rqrot);

    // convert to local planetocentric coordinate
    gqrot = go.Q * cam.rqrot;
    grot  = glm::mat3_cast(gqrot);

It correctly points straight down at runway 15 by default. When I rotate the camera up, it tilts off course. I had the same problem with matrix rotation before: it aligns with the global XYZ axes instead of the horizon reference frame. I am still investigating…

In the planetocentric frame (orbit area), it functions correctly.

      // rotate camera in local coordinate
      glm::dvec3 wv = av * 0.5;
      glm::dquat dr = glm::dquat(1.0, wv.x, wv.y, wv.z) * cam.rqrot;
      cam.rqrot = glm::normalize(cam.rqrot + dr);
      cam.rrot = glm::mat3_cast(cam.rqrot);

      // move camera in global frame
      cam.rpos -= glm::conjugate(cam.rqrot) * tv;

      // convert to global coordinate from local planetocentric coordinate
      gspos = tgtObject->getuOrientation(0) * cam.rpos;
      gpos  = tgtObject->getoPosition() + gspos;
      grot  = tgtObject->getuOrientation(0) * cam.rrot;
      gqrot = grot;

In local planetocentric coordinates (orbital space area), I tried rotating the camera up, and it aligns with the horizon level correctly. I am still figuring out why it does not work with the topocentric horizon frame (ground space area).