This only turns a 24-bit depth buffer into a 31-bit one (unless you throw in the sign bit to make 32). Though until that depth range issue is fixed, it only provides 30 bits of precision (positive exponents are never used). That’s useful, and it does change the bit distribution to a degree. But the distribution of bits is still heavily weighted towards the front, and generally in the same proportions.
Um, er, no. A floating point number is not 31 bits of precision plus one sign bit; there is the exponent in there too.
When you use a floating point depth buffer in unextended GL3 (NV_depth_buffer_float has an unclamped glDepthRange, so there the discussion is different), you reverse the roles of 1.0 and 0.0, i.e. you call:
glDepthRange(1.0, 0.0);
and reverse the depth test direction too; the nature of floating point accuracy makes this the better thing to do.
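As a concrete sketch, the reversed-depth setup described above might look like this in C, assuming a current GL context and a floating-point depth attachment; the function name is just illustrative:

```c
/* Reversed-depth setup sketched from the discussion above: map far to 0
 * and near to 1, then flip the comparison.  Assumes a current GL context
 * with a floating-point depth buffer (e.g. GL_DEPTH_COMPONENT32F). */
#include <GL/gl.h>

void setup_reversed_depth(void)
{
    glDepthRange(1.0, 0.0);    /* reverse the window-space depth mapping */
    glDepthFunc(GL_GREATER);   /* nearer fragments now have LARGER depth */
    glClearDepth(0.0);         /* the "farthest" clear value is now 0.0  */
    glClear(GL_DEPTH_BUFFER_BIT);
}
```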
Now onto the maths:
(1) You should not frame the precision question in terms of z_n (z in normalized device co-ordinates), since that is not the value stored in the depth buffer; z_window is, so that is what you need to futz with. There we get:
z_w = (1+n/z_eye)*f/(f-n)
which then becomes
z_eye = n/( z_w*(f-n)/f - 1 )
simplifying becomes
z_eye = nf/( z_w*(f-n) - f)
You are correct that replacing n by tn and f by tf, one can futz to see that z_eye/n is a function of z_w and f/n:
z_eye/n = R/( z_w(R-1) - R)
where R=(f/n).
Ick, gross, and actually useless for this discussion. The question at hand is: given the projective matrix and z_eye, what is stored in the depth buffer? Using the depth buffer to get z_eye is never a good idea and is pointless here; we are only worried about what is stored in the depth buffer given z_eye. The above gives a good reason why deferred shaders store z_eye linearly and do not use the depth buffer to calculate z_eye: for that calculation f/n does matter, and the round-off error gets real bad real fast.
Of course it is; you took the zFar plane out of the equation. I’m not doubting the ability to do this; I’m doubting:
1: The utility of doing it. Particularly for rendering geometry.
2: The practical need to talk about it at this point in the tutorial series.
I agree with (1), mostly, i.e. taking zFar to infinity.
What is important, in my opinion, is to give a detailed analysis of the effect of zFar and zNear. Saying that precision is based on zNear/zFar is dead wrong. I and others took this up by pointing out that one can make zFar = infinity and the loss of precision is tiny. The correct thing to do for a tutorial is to write out z_w given z_eye, zNear and zFar, and then notice that what matters are the ratios of zNear to z_eye and, to a very small extent, zFar to (zFar-zNear). It is your tutorial, so you can do whatever you like, but considering the effort you have placed into it, a good, accurate discussion of projective matrices would make it stand above many others.
Just to state again, the issue is this: given a z_eye and a perspective projection matrix, how much does z_eye have to change so that the fixed 24-bit depth buffer will produce different values? That is the exact question to answer in order to avoid z-fighting. Along that line, one needs to know z_w given z_eye and the projection matrix; in this case, having zNear and zFar is all that is needed. As I have written ad nauseam.
Also, why are you ignoring the derivation I wrote down previously? It clearly shows that z_w is a function of the ratios z/zNear and zFar/(zFar-zNear), which in turn states precisely what happens to the bits in the depth buffer.