Tutorial Proofreading and Correcting.

I should not post in the morning when I am tired either :o

Apparently I did not mess up my maths for z_n, z_w, whatever… no wonder no one corrected it… but I did make one mistake about how delta_z(z, zNear, zFar) varies: it is homogeneous, since the expression for z_w is, i.e.

delta_z(t*z, t*zNear, t*zFar) = t * delta_z(z, zNear, zFar)

But the really important bit of the derivation: the ratio of zNear to zFar is not what matters; the ratios of zNear to z_eye and, to a much lesser extent, zFar to (zFar - zNear) are what determine what happens.

The interesting and important bit: z_w is really just a function of the ratio of zNear to z_eye, which also makes sense.
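To make that concrete (using the expression for z_w that is written out further down in the thread, z_w = (1 + zNear/z_eye) * zFar/(zFar - zNear)): plugging in t*z_eye, t*zNear, t*zFar, every factor of t cancels,

z_w(t*z_eye, t*zNear, t*zFar) = (1 + (t*zNear)/(t*z_eye)) * (t*zFar)/(t*zFar - t*zNear) = z_w(z_eye, zNear, zFar)

so z_w itself is unchanged under uniform scaling, and the eye-space spacing delta_z between adjacent depth-buffer values picks up exactly one factor of t.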

Of course it is; you took the zFar plane out of the equation. I’m not doubting the ability to do this; I’m doubting:

1: The utility of doing it. Particularly for rendering geometry.

2: The practical need to talk about it at this point in the tutorial series.

Also, on this subject, one sees quite clearly that the ratio of zNear to zFar is not the important issue; what matters are the ratios of zNear to z and zFar to (zFar - zNear).

Given a particular Z value in NDC space (Zndc), the function to compute the equivalent value in camera space is as follows:

Zcamera = -2 * N * F /(((N - F) * Zndc) + N + F)

If you multiply both N and F by the same factor t, then you get:

Zcamera = -2 * N * F * t^2/(((Nt - Ft) * Zndc) + Nt + Ft)

or

Zcamera = t * (-2 * N * F /(((N - F) * Zndc) + N + F))

Did I screw up my math somewhere? This suggests that it is proportional to the ratio of N and F.
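(Spelling out the factoring step between the two forms above: the numerator picks up t^2 while every term of the denominator picks up a single t, so

Zcamera(N*t, F*t) = t^2 * (-2 * N * F) / (t * (((N - F) * Zndc) + N + F)) = t * Zcamera(N, F)

i.e. scaling both planes by t scales every reconstructed camera-space depth by the same t.)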

The above also guides one very well on how to use a floating point depth buffer well

This only turns a 24-bit depth buffer into a 31-bit one (unless you throw in the sign bit to make 32). Though until that depth range issue is fixed, it only provides 30 bits of precision (positive exponents are never used). That’s useful, and it does change the bit distribution to a degree. But the distribution of bits is still heavily weighted towards the front, and generally in the same proportions.

Presenting projection matrices with the little bits of math behind them (we are talking high school algebra here) will take the magic out of the entire process, which is critical to creating a good tutorial.

I do present the projection transformations with the math behind them.

This only turns a 24-bit depth buffer into a 31-bit one (unless you throw in the sign bit to make 32). Though until that depth range issue is fixed, it only provides 30 bits of precision (positive exponents are never used). That’s useful, and it does change the bit distribution to a degree. But the distribution of bits is still heavily weighted towards the front, and generally in the same proportions.

Um, er, no. A floating point number is not 31 bits of precision + one sign bit; there is the exponent in there too.
When you use a floating point depth buffer in unextended GL3 (NV_depth_buffer_float has an unclamped glDepthRange, so there the discussion is different), you reverse the roles of 1.0 and 0.0, i.e. you call:

glDepthRange(1.0, 0.0);

and reverse the depth test direction too; the nature of floating point accuracy makes this the better thing to do.
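For reference, a minimal sketch of that reversed setup, just the relevant GL state calls (assuming a 32-bit float depth attachment is already in place; the clear value and test direction simply follow from swapping the range):

glDepthRange(1.0, 0.0); /* near plane now maps to 1.0, far plane to 0.0 */
glClearDepth(0.0);      /* "farthest" value is now 0.0, so clear to that */
glDepthFunc(GL_GEQUAL); /* reversed depth test direction */
glEnable(GL_DEPTH_TEST);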

Now onto the maths:

(1) you should not be doing the precision question with z_n (z in normalized device co-ordinates), since that is not the value stored in the depth buffer; it is z_window, so that is what you need to futz with. There we get:

z_w = (1+n/z_eye)*f/(f-n)

which then becomes

z_eye = n/( z_w*(f-n)/f - 1 )

which simplifies to

z_eye = nf/( z_w*(f-n) - f)

You are correct that replacing n by tn and f by tf lets you futz around and see that z_eye/n is a function of z_w and f/n:

z_eye/n = R/( z_w*(R-1) - R )

where R=(f/n).

Ick, gross, and actually useless for this discussion. The discussion is: given a projection matrix and z_eye, what is stored in the depth buffer? Using the depth buffer to get z_eye is never a good idea and is pointless here. We are only worried about, given z_eye, what is then stored in the depth buffer. The above gives a good reason why deferred renderers store z_eye linearly and do not use the depth buffer to calculate z_eye. For that calculation what does matter is f/n, but the round-off error gets really bad really fast.
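To keep the direction of that calculation straight, here is a tiny sketch of the forward direction, i.e. given z_eye (negative), n and f, what integer ends up in a 24-bit fixed-point depth buffer. The function name and the sample values are mine, purely for illustration:

#include <math.h>
#include <stdio.h>

/* window-space depth for the standard perspective projection,
   with z_eye negative and the default glDepthRange(0, 1) */
static double depth_value(double z_eye, double n, double f)
{
    return (1.0 + n / z_eye) * f / (f - n);
}

int main(void)
{
    double n = 0.1, f = 1000.0, z_eye = -42.0;
    double zw = depth_value(z_eye, n, f);
    /* quantize to 24 bits, roughly what the fixed-point buffer stores */
    unsigned long bits = (unsigned long)floor(zw * 16777215.0 + 0.5);
    printf("z_w = %.10f -> 0x%06lx\n", zw, bits);
    return 0;
}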

Of course it is; you took the zFar plane out of the equation. I’m not doubting the ability to do this; I’m doubting:

1: The utility of doing it. Particularly for rendering geometry.

2: The practical need to talk about it at this point in the tutorial series.

I agree with (1), mostly, i.e. taking zFar to infinity.

What is important, in my opinion, is to give a detailed analysis of the effect of zFar and zNear. Saying that precision is determined by zNear/zFar is dead wrong. I and others took this up by pointing out that one can make zFar = infinity and the loss of precision is tiny. The correct thing to do for a tutorial is to write out z_w given z_eye, zNear and zFar and then notice that what matters are the ratios zNear to z_eye and, to a very small extent, zFar to (zFar - zNear). It is your tutorial, so you can do whatever you like, but considering the effort you have placed into it, putting in a good, accurate discussion of projection matrices would make it stand above many others.
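To make the zFar = infinity point concrete: in the limit zFar -> infinity, the factor zFar/(zFar - zNear) in

z_w = (1 + zNear/z_eye) * zFar/(zFar - zNear)

just goes to 1, so z_w -> 1 + zNear/z_eye. The values written to the depth buffer only change by that factor zFar/(zFar - zNear), which is why pushing zFar out, even to infinity, costs so little precision.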

Just to state again, the issue is this: given a z_eye and a perspective projection matrix, how much does z_eye have to change so that the fixed 24-bit depth buffer will produce different values? That is the exact question to answer to know how to avoid z-fights. Along that line, one then needs to know z_w given z_eye and the projection matrix; in this case having zNear and zFar is all that is needed. As I wrote ad nauseam.
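Spelling that out once with a derivative (this is just calculus on the z_w expression above, nothing new): since

dz_w/dz_eye = -(zNear/z_eye^2) * zFar/(zFar - zNear)

one step of a 24-bit buffer, i.e. a change of 1/2^24 in z_w, corresponds to roughly

|delta z_eye| ~= z_eye^2 * (zFar - zNear) / (zNear * zFar * 2^24)

and that eye-space spacing is what decides whether two nearby surfaces z-fight.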

Also, why are you ignoring the derivation I wrote down previously? It clearly shows that z_w is a function of the ratios z/zNear and zFar/(zFar-zNear), which in turn states precisely what happens to the bits in the depth buffer.

A floating point number is not 31 bits of precision + one sign bit; there is the exponent in there too.

The power of floating-point numbers comes from the distribution of those bits of precision, not how many there are. A 32-bit float can deal with a larger range of values than a 32-bit integer. But there are still only 2^32 possible different values for either case.

Because of the way floats are distributed, you would get more effective use out of a 32-bit floating-point z-buffer than a 32-bit normalized integer one. But that doesn’t change the fact that, given a [0, 1] range, you only get to use 30 of those bits (1 from the sign bit, and one from only using one sign of the exponent).

you should not be doing the precision question with z_n (z in normalized device co-ordinates), since that is not the value stored in the depth buffer

The value stored in the depth buffer ultimately depends on the depth range. But since the depth range is a linear transform, and since we’re assuming the full use of the depth range, it can be freely ignored. The entire [-1, 1] NDC range will be mapped to the entire [0, 1] depth range. And the mapping is linear, so the precision distribution in window space directly matches the precision distribution in NDC space.
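For completeness, the linear map in question (writing dNear and dFar for the glDepthRange values, which is just my naming here) is:

z_w = dNear + (dFar - dNear) * (Zndc + 1) / 2

which for the default glDepthRange(0, 1) collapses to z_w = (Zndc + 1) / 2. A linear, full-range remapping like that cannot change where the precision is concentrated, which is why NDC space is a fine place to do the analysis.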

And since this section is all about analyzing the precision distribution, I don’t see the need to make already complex equations more complex.

Just to state again, the issue is this: given a z_eye and a perspective projection matrix, how much does z_eye have to change so that the fixed 24-bit depth buffer will produce different values? That is the exact question to answer to know how to avoid z-fights.

And to state again, this is not a useful metric for showing someone how the distribution of depth precision changes with depth range changes. The goal of this section is to show how changes to the zNear/zFar values affect the distribution of precision in the depth buffer relative to camera space.

I’m using a function of a single variable: the depth range. You’re talking about a 2 variable function: depth range and a particular Z. This is inherently more difficult to present to the user. One can be presented with simple tables or even graphs. The other requires 3D graphing and is much more difficult to visualize.

Don’t forget that these are tutorials, not the definitive work on a subject. These exist to be learning aids. That means information needs to be properly simplified for user consumption. Multivariate functions are not conducive to that.

Also, why are you ignoring the derivation I wrote down previously? It clearly shows that z_w is a function of the ratios z/zNear and zFar/(zFar-zNear), which in turn states precisely what happens to the bits in the depth buffer.

Because I derived an equation that clearly shows that the ratio of zNear to zFar is what is important. Is there something wrong with the math for that equation?

The power of floating-point numbers comes from the distribution of those bits of precision, not how many there are. A 32-bit float can deal with a larger range of values than a 32-bit integer. But there are still only 2^32 possible different values for either case.

Because of the way floats are distributed, you would get more effective use out of a 32-bit floating-point z-buffer than a 32-bit normalized integer one. But that doesn’t change the fact that, given a [0, 1] range, you only get to use 30 of those bits (1 from the sign bit, and one from only using one sign of the exponent).

I cannot figure out anymore if Alfonse is serious or just having a laugh. Isn’t it clear that I already know how floats work? Actually, looking at the above, he seems kind of hazy on it. Here is a pop quiz question for you, Alfonse: why do you think that when using a floating point depth buffer, one should call glDepthRange(1.0, 0.0) (and thus change the direction of the depth tests)?

And to state again, this is not a useful metric for showing someone how the distribution of depth precision changes with depth range changes. The goal of this section is to show how changes to the zNear/zFar values affect the distribution of precision in the depth buffer relative to camera space.

I’m using a function of a single variable: the depth range. You’re talking about a 2 variable function: depth range and a particular Z. This is inherently more difficult to present to the user. One can be presented with simple tables or even graphs. The other requires 3D graphing and is much more difficult to visualize.

How about this: write down your function, f(t) where t = zFar/zNear, and tell me precisely what it evaluates to. Please do.

Because I derived an equation that clearly shows that the ratio of zNear to zFar is what is important. Is there something wrong with the math for that equation?

I’d bet a lot there is something really wrong. Here is why: I and others have given you evidence that letting zFar tend to infinity does not incur universal z-fighting. Moreover, the loss of precision from letting zFar tend to infinity has also been demonstrated to be tiny. This directly contradicts saying it only depends on zFar/zNear. What we have stated, repeatedly, is that the ratio zNear/z_eye is where most of the action is.

Don’t forget that these are tutorials, not the definitive work on a subject. These exist to be learning aids. That means information needs to be properly simplified for user consumption. Multivariate functions are not conducive to that.

Well, if you had read something for a change, you would have noticed that what matters is the ratio zNear/z_eye, which is a single value and determines what is placed in the depth buffer. With a little thought, that could then be used to make a plot: zNear/z_eye against discretized z_w, showing the thresholds at which the stored depth value changes.
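Since that plot keeps coming up, here is a rough sketch of how one might generate the underlying table (the helper name, the stepping, and the zNear/zFar values are mine, purely illustrative): step the eye-space distance, quantize z_w to 24 bits, and print only the points where the stored value actually changes, against zNear/z_eye:

#include <math.h>
#include <stdio.h>

/* window-space depth, z_eye negative, default glDepthRange(0, 1) */
static double z_window(double z_eye, double n, double f)
{
    return (1.0 + n / z_eye) * f / (f - n);
}

int main(void)
{
    const double n = 0.1, f = 1000.0;
    unsigned long prev = 0xffffffffUL;           /* impossible 24-bit value */
    for (double d = n; d <= f; d *= 1.01) {      /* d = -z_eye, geometric steps */
        double zw = z_window(-d, n, f);
        unsigned long bits = (unsigned long)floor(zw * 16777215.0 + 0.5);
        if (bits != prev) {                      /* stored depth actually changed */
            printf("zNear/z_eye = %.6f   24-bit z_w = 0x%06lx\n", n / d, bits);
            prev = bits;
        }
    }
    return 0;
}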

Those tutorials are yours and you can write whatever you want in them. However, you are now at a crossroads: do you write real tutorials and learning material, or just the same junk plastered all over the place, copy-pasteable code which newcomers copy without understanding why it works? Your tutorials, your choice.

Alfonse, where is the download link for the code in your tutorials?
I cannot find it anywhere.

Does the download link from the first page not work anymore?

I just uploaded a more recent version.

Well, I have not seen how to get from the tutorial itself, http://www.arcsynthesis.org/gltut/, to the project page, http://bitbucket.org/alfonse/gltut/downloads?highlight=8970

Good point.

Alfonse, you really should put a link from the website to the code, or at least add the link in your signature …

It’s done now.