It depends entirely upon the specific transformation.
Conventionally, object space and eye space are right-handed, while NDC is left-handed.
With the legacy API, transformations constructed with glTranslate, glRotate, and glScale (with positive scale factors) will have positive determinants and so won’t result in a mirror image. Perspective transformations constructed with glFrustum (with right>left, top>bottom, far>near) or gluPerspective (with far>near) will have a negative determinant and will change a right-handed coordinate system to a left-handed one (and vice versa).
Note that NDC is conventionally left-handed because greater depth is conventionally “farther” from the viewpoint. This arises from glDepthFunc(GL_LESS) and glDepthRange(0,1) being the initial state: fragments with lesser depth occlude those with greater depth, so greater depth appears farther from the viewpoint. Using glDepthFunc(GL_GREATER) or glDepthRange(1,0) (but not both) will invert that.
Ultimately it’s all just numbers; it’s up to the programmer to impose a “ground truth”, to make those numbers mean something.
Have you been using Unity? It likes to insist that Y is “vertical”. That’s pretty stupid, but it’s a popular engine, so we’re kind of stuck with it rotting programmers’ brains with that idea.
OpenGL doesn’t try to impose any physical interpretation on object coordinates (it imposes a physical interpretation on NDC, in the sense that X/Y are in the plane of the screen while Z is depth). If you have a sane world coordinate system where X-Y is the ground plane and Z is vertical, that just means that the initial “camera” orientation is facing downwards (plan view).
If your code simulates a human or other entity that focuses its view roughly on the horizon rather than on its feet, you’ll want to rotate the camera 90° anti-clockwise about the (positive) X axis (or rotate the world 90° clockwise, given that OpenGL doesn’t actually have a “camera”). That is all.
Where to put the 90° rotation depends upon which coordinate system you want to use for any other parts of the view transformation. Personally, I’d put it directly after the projection, so the view transformation is manipulated in a coordinate system where Z is vertical. IOW, a camera with an identity rotation is facing downward with north at the top. Unity wants a camera with an identity rotation to be facing north.
If you’re loading objects with Z vertical, then using projection * rotX90 * view will interpret view as being in a Z-vertical coordinate system, while using projection * view * rotX90 will interpret view as being in a Z-backward (i.e. negative Z is forward) coordinate system. You’ll need to take that into account in your camera motion (walking, flying, etc.) code.