Why do OpenGL projection matrices (Ortho/Frustum) map eye z into [-1, 1] instead of [0, 1]? Is there a numerical advantage in doing this, or is it just for symmetry with x and y in the unit cube?

What would be the disadvantages, if any, to using a mapping to [0, 1] with OpenGL?

Just to be clear, I’m not referring to problems with inappropriate near/far values or limitations of 24-bit z-buffers.

Actually, the projection matrix maps into clip space, which is still homogeneous. Clipping is done in these homogeneous coordinates: the x, y, and z components are clipped against the vertex’s -w and +w. Converting to normalized device coordinates is then just the division by the clip-space w, which yields the unit cube.
The OpenGL viewport transformation expects z-coordinates in the range [-1, 1]. Changing the perspective matrix to give results in the [0, 1] range will render everything into the far half of your depth range.
Defining it like that was probably just the more intuitive choice.
BTW, DX’s <ducks> viewport transformation expects z-coordinates in the range [0,1].

The viewport transformation is the problem here, and it seems there’s no escaping it. I looked high and low for a way to modify it to expand back to the full [0, 1] range, to no avail. The best one can hope for, apparently, is [1/2, 1].

And yes, I wasn’t going to mention it, but I was also hoping to sneak my D3D matrices into OpenGL (hangs head in shame).

Big thanks!

Edit: Is [1/2, 1] really such a bad thing, numerically, compared to [0, 1]? I’m still trying to get a grip on the floating point math here, but after fixed point quantization in the depth buffer, is there really an appreciable difference in ranges? If so, why?

Edit: After some review, and assuming 32-bit floating point with 1 sign bit, a 23-bit mantissa, and an 8-bit exponent, we should have roughly 2 billion floating point values in [-1, 1] (every float of magnitude <= 1, since each of the 127 usable exponents contributes 2^23 values per sign), half that for [0, 1], and the distribution is symmetric but very uneven, bunching up near zero (please correct me if I goofed). So the impact ultimately seems to be a function of near/far and z granularity. I’ve used the so-called “z-trick” in the past to skip the z-buffer clear with a split depth range, but only for indoor scenes with modest depth ranges. Not sure how it’ll hold up in the great outdoors; having my doubts. (shrug)

I’m not sure what you want to achieve, but for porting DX projection matrices to OpenGL you just need to make sure you’ve got all the coordinate-system handedness issues between the two APIs sorted out, then append a scale-and-translate operation to the projection, resulting in
z’ = z * 2 - 1.

That’s probably what I’ll end up doing (prepend a z transformation), something I was really hoping to avoid (maintaining two partly different matrix/calibration libraries, or an extra transformation tacked onto the end). Fortunately, handedness is not an issue (you can use a right-handed coordinate system in D3D, too).

I was also looking for a better understanding of what is happening with Z before it gets to the depth buffer (hence my plebeian numerical analysis). Something I’ve never really given much thought to, until now.

“Fortunately, handedness is not an issue (you can use a right-handed coordinate system in D3D, too).”

Not if you want fixed pipeline non-local viewer lighting to work. It’s hardcoded left-handed in DX. And also not if you want any of your shaders to be portable among applications.
Trust me, been there, done that.

“Not if you want fixed pipeline non-local viewer lighting to work. It’s hardcoded left-handed in DX. And also not if you want any of your shaders to be portable among applications.”

Yeah, the fixed pipe is not a problem; I don’t use it.

Shaders have no issues for me, that I can think of. Were you referring to something in particular?

If your shaders are isolated to your program, it’s your choice.
Standard DX programs are left-handed from object to screen space.
OpenGL is right-handed from object space to view space. Clip space and screen space are left-handed; the projection matrix flips it.

If you send original left-handed DX coordinates to OpenGL and adjust the handedness only with the matrices, the object space is not what it normally is in OpenGL, and I can build shaders that would fail when written for the right-handed OpenGL object space, e.g. ones using cross products.

Well, it’s been my experience that you can use “D3D” matrices (just a standard right-handed coordinate system) directly in OpenGL with the same end result, except that the projection matrix and viewport transformation will short-sheet you in the z. In other words, the view transformations are the same; the difference is in the projection matrices, which, compared side by side, differ only in the way z is mapped.

Thanks for the heads up, though. If I encounter any problems, I’ll know what to look for first.