Problem interpreting depth values

If I do this:-

Render scene…

glReadPixels(0,0, 512, 512, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, buffer);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);

I should have a texture containing a continuous grey scale depicting the scene’s depth buffer, shouldn’t I?
As it is, I only get a variance in grey when an object in the scene is up close to the camera, as if it’s clamping to white after only a few depth units.
Why is this happening?

Right, I think I know why. It’s because the depth buffer is non-linear: most of its precision is concentrated near the camera, so when the float in the depth buffer is converted to a byte, nearly all of the byte range is used up within a short distance of the near plane.
This means I can’t use the depth buffer for what I need.
Basically, I’m trying to create a height map from a polygonal model. I want to render the model from above and copy the ‘distance-from-camera’ depth values into a height map. Now, is there any way I can simply convert the z-value of a vertex into a depth component? In other words, if the z value of a vertex was 3, then the value written into the depth buffer would be:-


Perhaps in a vertex program?
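For what it’s worth, in a programmable pipeline this is straightforward: a fragment shader can overwrite the stored depth with a linearly remapped value. A hedged GLSL sketch (the `nearPlane`/`farPlane` uniforms and `eyeDist` varying are assumptions for illustration, not from this thread):

```glsl
// Hypothetical fragment shader: replace hyperbolic depth with linear depth.
// nearPlane / farPlane are assumed uniforms set by the application;
// eyeDist is the eye-space distance passed down from the vertex shader.
uniform float nearPlane;
uniform float farPlane;
varying float eyeDist;

void main()
{
    // Remap eye distance linearly to [0,1] between the clip planes.
    gl_FragDepth = clamp((eyeDist - nearPlane) / (farPlane - nearPlane),
                         0.0, 1.0);
}
```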

Weird, but simple solution: use linear fog.

Set the near clipping plane as the fog start, and the far plane as the fog end.
Draw your scene without color, and use an explicit color as the fog color.
When you retrieve the color buffer, you’ll have nice values representing a linear depth.

I recommend calling glHint(GL_FOG_HINT, GL_NICEST), otherwise the OpenGL implementation may compute fog per-vertex and approximate the depth values into something that does not do what you want.
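A minimal fixed-function sketch of this trick, assuming a current GL context, a 512x512 viewport, and clip planes at 1.0/100.0 (all illustrative values, not from the thread):

```c
/* Sketch: linear fog as a linear depth encoder.
 * Assumes the projection uses zNear = 1.0 and zFar = 100.0. */
GLfloat fogColor[4] = { 1.0f, 1.0f, 1.0f, 1.0f };  /* "far" encodes as white */

glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogf(GL_FOG_START, 1.0f);       /* = near clipping plane */
glFogf(GL_FOG_END, 100.0f);       /* = far clipping plane */
glFogfv(GL_FOG_COLOR, fogColor);
glHint(GL_FOG_HINT, GL_NICEST);   /* request per-pixel fog */

glColor3f(0.0f, 0.0f, 0.0f);      /* draw the scene "without color" (black) */
/* ... render scene ... */

/* Read the color buffer instead of the depth buffer: */
glReadPixels(0, 0, 512, 512, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);
```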

Do you see what I mean?


What a brilliant solution!
I can’t thank you enough, vincoof!

You’re more than welcome.

I recommend drawing the ‘uncolored’ scene in black (the darkest black, of course), setting the fog color to either red, green or blue, and then reading back that red, green or blue channel.
Also, to speed things up, you could enable color masking so that only the one channel of your choice is written.
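For example, if the fog color is pure red, something like this (a sketch, assuming a current GL context) restricts writes to the red channel:

```c
/* Write only the red channel while rendering the fogged scene */
glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_FALSE);
/* ... render scene ... */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);   /* restore full writes */
```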

Let me start by apologizing: I don’t usually like to bring back old topics. Can you explain to an 8-year-old child what this technique means and how it works?
Imagine that the 8-year-old somehow has some experience of OpenGL :wink:

In a typical scene, if you do
glReadPixels(0,0, 512, 512, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, buffer);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);
then the resulting image’s colors will be, say, 253-255, i.e. values 0-252 will be vacant (not utilised).
That’s because of the way the z-buffer works.

With linear fog, the depth range is split up evenly, so the values are better utilised: the image will contain colors from 0-255 and not just 253-255.

Edit:
Also see the FAQ; there’s an explanation of depth buffer precision there.

Thank you Zed for your quick reply.

I am trying to build a distance field of a set of points on the XY plane. Imagine placing many cones with their tips at each point location and the cone axis parallel to the Z axis.

The camera is placed on the Z axis, shooting a picture of the cones. If you colored the cones with different colors, you would get the Voronoi diagram of the set of points.

If the radius of each cone’s base circle equals its height (which means a 45-degree slope), then the depth of each pixel in the scene (minus the distance of the set of points from the camera) gives the distance from that particular pixel to the closest point in the set, which is the numerical distance field.

I looked online to see if anyone has posted something like this before, but I haven’t found it so far. I will try coding it myself.

Thanks, again!