With modern core OpenGL’s jettisoning of any builtin projection matrix API, is there any compelling reason that we should continue to work in eyespace along -z? As far as I understand, we only needed to account for this negative viewpoint if we were using glOrtho or glFrustum.

The view direction is specified in the modelview matrix, which has always been “available” to OpenGL users. Perhaps the convention exists because an identity modelview matrix looks down negative z, though I wouldn’t know. I would also guess that terminology such as “z-buffering” comes from this fact. In my opinion such terminology is in fact a good reason to keep using negative z as the view direction. Since, as you say, this only really matters in eye-space, why not keep a convention for people to follow?

glOrtho and glFrustum affect the projection matrix, which has nothing to do with the view direction. Or rather, it might once combined into the modelview-projection matrix, but to put it more clearly: the projection itself has nothing to do with eye-space.

With modern core OpenGL’s jettisoning of any builtin projection matrix API, is there any compelling reason that we should continue to work in eyespace along -z?

The projection matrix is simpler that way. Though to be fair, you could have used any projection matrix with fixed-function OpenGL too. You could have put an inversion scale matrix along the Z axis into the GL_PROJECTION matrix if you really wanted to make “eyespace” point along +Z.
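A rough sketch of that workaround, assuming row-major matrices and column vectors: compose any standard -Z projection with a Z-flip scale (what glScalef(1, 1, -1) applied to GL_PROJECTION would do), so that eyespace effectively looks down +Z.

```python
def mat_mul(A, B):
    # 4x4 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(M, v):
    # apply a 4x4 matrix to a column vector
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# glOrtho(-1, 1, -1, 1, 1, 10): maps eye z in [-1, -10] to clip z in [-1, 1]
ortho = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, -2 / 9, -11 / 9],
    [0, 0, 0, 1],
]
# the Z-flip a glScalef(1, 1, -1) call would append
flip_z = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]

flipped = mat_mul(ortho, flip_z)             # projection * flip, as GL composes them
print(transform(flipped, [0, 0, 1, 1])[2])   # +Z eyespace near plane -> clip z = -1.0
print(transform(flipped, [0, 0, 10, 1])[2])  # +Z eyespace far plane  -> clip z = +1.0
```

With the flip appended, points at positive eye-space z land exactly where negative-z points did before.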

glOrtho and glFrustum affect the projection matrix, which has nothing to do with the view direction.

Actually they do have quite a lot to do with the view direction. Every matrix is a transformation from one specific space to another specific space. If you change what space you start from (by changing the space that the modelview matrix transforms into), you must also change the projection matrix in order to get the points into the correct clip-space.

So if you want to change the modelview space, you must also change the projection matrices.

This doesn’t make sense to me. Z-buffering works equally well in left- and right-hand coordinate systems. In fact, the default pixel space depth range effectively maps [-near, -far] to [0, 1], a flip in direction.

The transformation produced by glOrtho range-maps [-near, -far] to [-1, 1]. If you don’t make sure your model is in [-near, -far], it will be clipped. Putting it in [near, far] will yield a bunch of glClearColor. Thus, knowing that old OpenGL treats the view direction as negative z is quite important when thinking about projection.
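To make that mapping concrete, here is a small check (a sketch, using the glOrtho matrix as given on the OpenGL reference pages, row-major with column vectors):

```python
def ortho(l, r, b, t, n, f):
    # the matrix glOrtho(l, r, b, t, n, f) loads
    return [
        [2 / (r - l), 0, 0, -(r + l) / (r - l)],
        [0, 2 / (t - b), 0, -(t + b) / (t - b)],
        [0, 0, -2 / (f - n), -(f + n) / (f - n)],
        [0, 0, 0, 1],
    ]

def transform(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

P = ortho(-1, 1, -1, 1, 1, 10)
print(transform(P, [0, 0, -1, 1])[2])   # eye z = -near -> clip z = -1.0
print(transform(P, [0, 0, -10, 1])[2])  # eye z = -far  -> clip z = +1.0
print(transform(P, [0, 0, 1, 1])[2])    # eye z = +near -> outside [-1, 1], clipped
```

A model placed at positive eye-space z ends up outside the [-1, 1] clip range, which is exactly the “bunch of glClearColor” described above.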

It must be simpler for perspective only. glOrtho would become simpler if switched to a positive z-axis view direction. We’d be able to ditch that -2 scale factor along z and put in a nice positive 2.

The projection matrix is simpler that way. Though to be fair, you could have used any projection matrix with fixed-function OpenGL too. You could have put an inversion scale matrix along the Z axis into the GL_PROJECTION matrix if you really wanted to make “eyespace” point along +Z.

In my understanding, eye-space coordinates are obtained by multiplying world-space coordinates by the modelview matrix. Clearly then eye-space coordinates depend only on the modelview matrix. The view direction, expressed in world-space, is thus present in the modelview matrix.

Actually they do have quite a lot to do with the view direction. Every matrix is a transformation from one specific space to another specific space. If you change what space you start from (by changing the space that the modelview matrix transforms into), you must also change the projection matrix in order to get the points into the correct clip-space.

So if you want to change the modelview space, you must also change the projection matrices.

I agree that the “standard” ways of computing various projection matrices assume a -z view direction.

This doesn’t make sense to me. Z-buffering works equally well in left- and right-hand coordinate systems. In fact, the default pixel space depth range effectively maps [-near, -far] to [0, 1], a flip in direction.

Sure z-buffering will work regardless. I am just saying that the name will cease to make sense if you are looking down +y, for instance.

True, the builtin gl-functions for setting up the projection matrix assume -z viewing direction. As pointed out, it has always been possible to exactly specify the projection matrix using glLoadMatrix though.

It must be simpler for perspective only. glOrtho would become simpler if switched to a positive z-axis view direction. We’d be able to ditch that -2 scale factor along z and put in a nice positive 2.

Um, the difference between a negative and a positive is not what I mean by “simpler”. I mean that it has fewer non-zero terms and fewer controls over the eventual results.

In my understanding eye-space coordinates are obtained by multiplying world-space coordinates by the modelview matrix. Clearly then eye-space coordinates depend only on the modelview matrix.

OpenGL defines what clip-space means, as well as the subsequent spaces. Therefore, OpenGL defines what is forward and backward in clip space. The projection matrix transforms into clip space from eyespace, so the projection matrix defines what is forward and backwards in eyespace.

If you change your modelview matrices to transform the world into an eyespace with an inverted Z axis, then your non-inverted projection matrices will make things in clip-space backwards. The rest of the pipeline will clip as though your camera were backwards, and what you consider forward in modelview space and eyespace will actually be backward.
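The clipping consequence can be illustrated with the standard glFrustum-style matrix, whose last row makes clip-space w equal to -z_eye (a sketch; only the z and w rows matter here):

```python
def perspective(n, f):
    # symmetric -Z frustum, unit focal length; x/y rows are irrelevant here
    return [
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, (n + f) / (n - f), 2 * n * f / (n - f)],
        [0, 0, -1, 0],  # clip w = -z_eye
    ]

def transform(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

P = perspective(1, 100)
print(transform(P, [0, 0, -5, 1])[3])  # in front under the -Z convention: w = 5 > 0
print(transform(P, [0, 0, 5, 1])[3])   # point from a +Z eyespace: w = -5 < 0
```

A point handed to this matrix from an inverted (+Z) eyespace comes out with negative w, so the pipeline clips it as if it were behind the camera.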

I finally got around to composing a +Z-based perspective projection matrix, and I do not find it any more complicated than the historical -Z one. The only difference is that the signs in column 3 are flipped. For the orthographic projection, only one sign is flipped.
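The comparison can be sketched directly (assuming the gluPerspective-style layout, row-major with column vectors): the +Z variant differs from the classic -Z matrix only in the signs of the third (z) column, and both map their respective near plane to NDC z = -1 after the perspective divide.

```python
def persp(n, f, flip_z=False):
    # flip_z=True flips the signs of the third column, giving the +Z variant
    s = -1 if flip_z else 1
    return [
        [1, 0, 0, 0],  # unit focal length; x/y rows don't matter here
        [0, 1, 0, 0],
        [0, 0, s * (n + f) / (n - f), 2 * n * f / (n - f)],
        [0, 0, -s, 0],
    ]

def ndc_z(M, z_eye):
    v = [0, 0, z_eye, 1]
    clip = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
    return clip[2] / clip[3]  # perspective divide

P_neg = persp(1, 10)
P_pos = persp(1, 10, flip_z=True)
print(ndc_z(P_neg, -1))   # -Z convention, near plane -> -1.0
print(ndc_z(P_pos, 1))    # +Z convention, near plane -> -1.0
print(ndc_z(P_neg, -10))  # -Z convention, far plane  -> +1.0
```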

It’s just a convention. So long as everybody’s working off the same convention then code is portable and reusable between applications, that’s all.

I’m guessing that the reason for the historic choice was that OpenGL’s coord system was influenced by Cartesian coords with positive Y going up and positive X going right - you know, the kind of charts you used to do when graphing an equation back in school. Hence the “bottom left is the origin” thing too, which tends to catch people out the first time they lay a UI out on screen using OpenGL.

Endless arguments could be made about which is more correct, but IMO it’s not worth getting religious over.