My main question is: what is the correspondence between depth-buffer values and OpenGL's render coordinates?

My guess is that the near plane maps to 0x00000000 and the far plane to 0xFFFFFFFF.
I use GL_UNSIGNED_INT when reading back the depth buffer.

The thing is, this correspondence works, but the values do not look right.

For example: I put an object at (0,0,0) and the camera at (0,0,-6).
Near plane = 1, far = 5000.
All the depth values come back around 0xE8E8E8E8, but they should be somewhere near 0.

And if I set far = 5, I get similar values (which are probably correct in that case).

Why does this happen?

In addition, the projection matrix is set up in a somewhat weird way, and this might be one of the reasons.

I am using other people's code. In this code, shaders are used to render the scene; neither glFrustum nor gluPerspective is called. Instead, the perspective matrix is built directly in the code and passed to the shader.

where width and height are calculated from the field of view and the geometric size of the near plane.
I also wonder why these calculations are not the same as the ones given in the documentation for gluPerspective.

I think your error is assuming that the depth buffer Z values are assigned linearly from near to far. With a perspective projection, they're not: they're assigned roughly proportionally to -1/eye_space_z, with 0.0 = near, 1.0 = far.

What’s actually stored is window-space Z:

EYE-SPACE -> CLIP-SPACE -> NDC-SPACE -> WINDOW SPACE

Making some standard assumptions, the window space Z in terms of eye-space Z is something like:

z_win = 1/(f-n) * [ (f+n)/2 + f*n/z_eye ] + 1/2

The assumptions include: a perspective projection, a DepthRange of 0…1, and w_eye = 1.

Crunching through your example, we have z_eye = -6 (assuming we're looking directly at the origin), so z_win = 0.8335 (where 0 = near, 1 = far), or 0xD56041 when mapped to a fixed-point 24-bit depth buffer. Not quite your 0xE8E8E8, but in that general ballpark. So maybe your look-at vector wasn't pointed directly at the origin.

I am sorry to reply so late.
Yesterday I spent about a day trying to understand all these transformations and matrices, but it seems I still haven't got the point.

Anyway, I have found the gluProject function. It is similar to what I am doing, but I do not want to use the GLU library, so I have to implement it myself.

Dark Photon,
First, I do not understand where the -1/eye_space_z comes from.
According to the gluProject source code and its general description,
there is only a matrix multiplication, and no inversion (or anything else that would produce 1/eye_space_z).
So what is the truth?

Second, the docs for gluProject say it uses the ModelView and Projection matrices. I would understand if it used only the View and Projection matrices, but I do not understand how these calculations can depend on the Model orientation.

I mean, if I have 5 objects and each of them has a different Model matrix, will I get 5 different answers?
There should be just one answer.

As for my second question:
did you use z_eye instead of z_obj (as gluProject does) so as to avoid multiplying by the ModelView matrix and use only the Projection matrix? I saw a description of the coordinate transformations and terms (like z_obj, z_eye, z_clip, …) there.

But still, don't we need to use the View matrix?
If I use only the Projection matrix, I get w = 0 (where w is the fourth coordinate of the resulting vector).

For example, if v = (1, 0, 0, 1)' and the last row of the Projection matrix is always (0, 0, -1, 0), then I get

v_new = P v = (x, x, x, 0)',

that is, w = 0.
Thus, I cannot normalize the coordinates…

And the funny thing is that with these calculations,

ProjMatrix[10] = far / (near - far);
ProjMatrix[14] = near * far / (near - far);
z_e = [-far, -near];

which are done by the other people in the code, I already get z_n in the [0, 1] interval!
Wow, I am not sure if it is OK to have it in such an interval right away?