Help with transformations

I am not certain whether this is the right forum (apologies if not). I have been having some issues with transformations that I am trying to resolve.

From the user side, navigation and operations are provided in a LHS coordinate system such as the one below:
[image: diagram of the user's coordinate system]
Here the angles refer to yaw (azimuth) angles, and Z is elevation.

I am using GLM to generate the transformations so that I can view information in this system correctly. This is how I went about it (a rough code sketch follows these steps):
I switched the y coordinates for the z coordinates.
I generated my view direction to point towards the target rather than towards the observer.
I used the LHS version of GLM’s lookAt() function (lookAtLH()), thinking that it is already receiving data in a LHS coordinate system.
I did the same with GLM’s perspective() function (i.e. I used perspectiveLH()).
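In code, it was roughly this (a minimal sketch of the steps above, not my actual code; the FOV, aspect ratio and clip distances are made-up values):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::mat4 buildViewProj(glm::vec3 observer, glm::vec3 target)
    {
        // Step 1: swap Y and Z so that elevation lands on the Y axis.
        glm::vec3 eye(observer.x, observer.z, observer.y);
        glm::vec3 centre(target.x, target.z, target.y);

        // Steps 2-4: the view direction points towards the target, and
        // the LHS variants of lookAt() and perspective() are used.
        glm::mat4 view = glm::lookAtLH(eye, centre, glm::vec3(0, 1, 0));
        glm::mat4 proj = glm::perspectiveLH(glm::radians(60.0f),
                                            16.0f / 9.0f, 0.1f, 1000.0f);
        return proj * view;
    }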

Unfortunately, the result does not behave well. When moving left or right I appear to be moving forward until I hit some point (the target point), and then the motion goes into reverse.

That image shows a right-handed coordinate system.

Why?

Note that literally swapping Y and Z corresponds to a reflection in the plane Z=Y, which results in a mirror image (the transformation has a negative determinant).

If you want a view which is looking along the Y axis, rotate 90° about the X axis.
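This is easy to check numerically with GLM (a quick sketch; both matrices are built only to print their determinants):

    #include <cstdio>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    int main()
    {
        // Swapping Y and Z is a reflection: determinant -1 (mirror image).
        glm::mat3 swapYZ(1, 0, 0,
                         0, 0, 1,
                         0, 1, 0);
        // A 90° rotation about X is a proper rotation: determinant +1.
        glm::mat3 rotX90(glm::rotate(glm::mat4(1.0f), glm::radians(90.0f),
                                     glm::vec3(1, 0, 0)));
        std::printf("det(swapYZ) = %g\n", glm::determinant(swapYZ)); // -1
        std::printf("det(rotX90) = %g\n", glm::determinant(rotX90)); // +1
    }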

Thanks @GClements. You are absolutely right! I was misinterpreting what I had read (there are several versions of this out there)!

My understanding is that traditionally we read object coordinates using a RHS. The view transformation will not change this coordinate system, but the projection transformation will switch it to a LHS. Correct?

So, if I understand what you are proposing: I should first read in the objects, which are in a RHS system except that the third dimension is given as Z rather than Y, and rotate 90° CW around the X axis.
Then apply the VIEW and PROJECTION matrices.
Correct?

One last question: what is the effect (on the view matrix) of reversing the view direction vector, i.e. taking it from the observer position towards the target instead of from the target towards the observer position?

Thanks again.

It depends entirely upon the specific transformation.

Conventionally, object space and eye space are right-handed, while NDC is left-handed.

With the legacy API, transformations constructed with glTranslate, glRotate, and glScale with positive scale factors will have positive determinants and so won’t result in a mirror image. Perspective transformations constructed with glFrustum (with right>left, top>bottom, far>near) or gluPerspective (with far>near) will have a negative determinant and will change a right-handed coordinate system to a left-handed system (and vice versa).
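For example, GLM’s perspective() builds the same matrix as gluPerspective (with GLM’s default right-handed, [-1, 1] clip configuration), so the sign is easy to verify (a minimal sketch):

    #include <cstdio>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    int main()
    {
        // Equivalent of gluPerspective(60, 4/3, 0.1, 100), with far > near.
        glm::mat4 proj = glm::perspective(glm::radians(60.0f), 4.0f / 3.0f,
                                          0.1f, 100.0f);
        // Negative: right-handed eye space becomes left-handed NDC.
        std::printf("det(proj) = %g\n", glm::determinant(proj));
    }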

Note that NDC being conventionally left-handed is because greater depth is conventionally “farther” from the viewpoint. This arises from glDepthFunc(GL_LESS) and glDepthRange(0,1) being the initial state: fragments with lesser depth occlude those with greater depth, so greater depth appears farther from the viewpoint. Using glDepthFunc(GL_GREATER) or glDepthRange(1,0) (but not both) will invert that.
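For instance, inverting the convention via the comparison alone looks like this (a sketch; note that the clear value has to follow suit):

    /* Reverse the comparison: lesser depth now appears farther. */
    glDepthFunc(GL_GREATER);
    /* Clear the depth buffer to the new "far" value (0 rather than 1),
       otherwise no fragment would ever pass the test. */
    glClearDepth(0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);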

Ultimately it’s all just numbers; it’s up to the programmer to impose a “ground truth”, to make those numbers mean something.

Have you been using Unity? It likes to insist that Y is “vertical”. Which is pretty stupid, but it’s a popular engine so we’re kind of stuck with it rotting programmers’ brains with that idea.

OpenGL doesn’t try to impose any physical interpretation on object coordinates (it imposes a physical interpretation on NDC, in the sense that X/Y are in the plane of the screen while Z is depth). If you have a sane world coordinate system where X-Y is the ground plane and Z is vertical, that just means that the initial “camera” orientation is facing downwards (plan view).

If your code simulates a human or other entity which focuses its view roughly on the horizon rather than on its feet, you’ll want to rotate the camera 90° anti-clockwise about the (positive) X axis (or rotate the world 90° clockwise, seeing as OpenGL doesn’t actually have a “camera”). That is all.

Where to put the 90° rotation depends upon which coordinate system you want to use for any other parts of the view transformation. Personally, I’d put it directly after the projection, so the view transformation is manipulated in a coordinate system where Z is vertical. IOW, a camera with an identity rotation is facing downward with north at the top. Unity wants a camera with an identity rotation to be facing north.

If you’re loading objects with Z vertical, then using

projection * rotX90 * view

will interpret view as being in a Z-vertical coordinate system, while using

projection * view * rotX90

will interpret view as being in a Z-backward (i.e. negative Z is forward) coordinate system. You’ll need to take that into account in your camera motion (walking, flying, etc) code.
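Concretely, the two options look like this (a sketch assuming column vectors and GLM’s right-handed rotate(), so rotating the world clockwise is -90° about +X):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Rotate the Z-up world 90° clockwise about +X, putting world Z
    // (vertical) onto eye-space Y.
    static const glm::mat4 rotX90 =
        glm::rotate(glm::mat4(1.0f), glm::radians(-90.0f), glm::vec3(1, 0, 0));

    // view is manipulated in a Z-vertical system
    // (an identity view is a plan view, looking straight down).
    glm::mat4 composeZUp(const glm::mat4& projection, const glm::mat4& view)
    {
        return projection * rotX90 * view;
    }

    // view is manipulated in the usual Y-up / -Z-forward system.
    glm::mat4 composeYUp(const glm::mat4& projection, const glm::mat4& view)
    {
        return projection * view * rotX90;
    }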

As usual, @GClements, thanks for your explanations! This is very helpful.

No, I am not using Unity. I am reading the information from a geographical coordinate system (mostly UTM projections) and then trying to visualize it in OpenGL. I want the user to query the visualization with the geographical system in mind (i.e. north = 0°, east = 90°, elevations on Z).
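For the record, a query direction in that system can be built along these lines (a sketch, assuming X = east and Y = north, as is usual for UTM):

    #include <cmath>
    #include <glm/glm.hpp>

    // Azimuth: north = 0°, east = 90° (clockwise from north).
    // Elevation: angle above the horizontal plane, in a Z-up world
    // with X = east and Y = north.
    glm::vec3 directionFromAzEl(float azimuthDeg, float elevationDeg)
    {
        float az = glm::radians(azimuthDeg);
        float el = glm::radians(elevationDeg);
        return glm::vec3(std::sin(az) * std::cos(el),   // east  (X)
                         std::cos(az) * std::cos(el),   // north (Y)
                         std::sin(el));                 // up    (Z)
    }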

Thanks again!