glReadPixels depth vs gluProject winz


I’m doing a sort of point occlusion culling using the depth buffer, for a complicated selection mechanism. I project vertices to screen coordinates and then do a manual depth test, comparing the winz returned by gluProject with the result of a glReadPixels (with GL_DEPTH_COMPONENT) at (winx, winy), also returned by gluProject.

However, the winz returned by gluProject seems to be in a different coordinate system than the depth values returned by glReadPixels, even though both lie in the range [0, 1].

Is there a way I can convert the return value of either function so that they are in the same coordinate system, and thus can be compared?
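In code, the comparison I’m attempting is roughly this (a sketch; the function name and bias value are mine, and it assumes the default glDepthRange(0, 1)):

```c
#include <assert.h>

/* Sketch of the manual depth test. Both values are window-space depths
   in [0, 1] under the default glDepthRange(0, 1): winz comes from
   gluProject, bufz from glReadPixels with GL_DEPTH_COMPONENT. A small
   bias absorbs rasterization/precision differences between the two
   paths. */
int point_visible(double winz, double bufz, double bias)
{
    /* visible if the projected point is not behind what the depth
       buffer already contains (plus tolerance) */
    return winz <= bufz + bias;
}
```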

Thanks. :slight_smile:

i think the specification only guarantees that the depth buffer is monotonic… i’m not sure, but i think the implementation is “allowed” to compress the values stored in it. In practice it does concentrate precision near the viewer, but that may just be a side effect of varying floating-point representation accuracy.
so if the coordinates don’t match… i wouldn’t guarantee anything, even if you find the relation by trial and error (or by “graphing” the function with an ortho-projected sloped surface, maybe).
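To make that nonlinearity concrete: for a standard perspective projection with the default glDepthRange(0, 1), a window-space depth value can be mapped back to a linear eye-space distance like this (a sketch, not from this thread; zNear/zFar are whatever you passed to gluPerspective or glFrustum):

```c
#include <assert.h>
#include <math.h>

/* Convert a [0, 1] window-space depth value (as read back with
   glReadPixels + GL_DEPTH_COMPONENT) to eye-space distance, assuming a
   standard perspective projection and the default glDepthRange(0, 1).
   The formula makes the compression explicit: most of the [0, 1] range
   is spent near zNear. */
double window_depth_to_eye(double d, double zNear, double zFar)
{
    return (zNear * zFar) / (zFar - d * (zFar - zNear));
}
```

With zNear = 1 and zFar = 100, a window depth of 0.5 corresponds to an eye distance of only about 1.98, not 50.5 — which is why precision degrades so quickly away from the viewer.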

In other words, the depth values returned by glReadPixels may vary (per hardware)?

I’d taken rounding errors into account by allowing values within a certain range (so clicking on a surface near the target node would also select it), but maybe I’m better off writing a ray-triangle intersection routine.
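If it comes to that, the usual ray-triangle test is the Möller–Trumbore algorithm, which is fairly compact. A sketch (names and epsilon are mine, not from this thread):

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}

/* Möller–Trumbore ray/triangle intersection. Returns 1 and writes the
   ray parameter *t if the ray orig + t*dir hits triangle v0v1v2 in
   front of the origin. */
int ray_tri(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, double *t)
{
    const double EPS = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (fabs(det) < EPS) return 0;      /* ray parallel to triangle */
    double inv = 1.0 / det;
    Vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return 0;   /* outside first barycentric edge */
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return 0;
    *t = dot(e2, q) * inv;
    return *t > EPS;                    /* hit must be in front of origin */
}
```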

Thanks for the info.

I agree with you. Dealing with the hardware z-buffer is a real nightmare if you need precise 3D information; a ray-triangle algorithm is more appropriate. However, traversing the triangle list for intersections may be costly (even with a speedup structure like an octree). One solution, which maybe you’ve already thought of, is object-ID rendering. This means you assign a unique color to each triangle and then render your colored triangles with the depth buffer enabled. As a result, for each pixel you immediately get the correct visible triangle, and the only computation left is a ray/plane intersection (not ray/triangle). It’s constant time! :smiley:

This obviously assumes you keep a 1D array somewhere in memory for triangle-ID indexing.
It works very well for me.

Well-known limitation: the number of colors. With an RGB buffer you can index up to about 16 million triangles; more if you read RGBA pixels.
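The ID-to-color packing described above could be sketched like this, assuming an 8-bit-per-channel RGB buffer (function names are mine):

```c
#include <assert.h>

/* Pack a triangle ID into an 8-bit-per-channel RGB color for the
   object-ID render pass, and unpack it again after glReadPixels.
   With 24 bits this indexes up to 2^24 = 16,777,216 triangles; using
   the alpha channel too raises that to 2^32. */
void id_to_rgb(unsigned id, unsigned char rgb[3])
{
    rgb[0] = (unsigned char)((id >> 16) & 0xFF);
    rgb[1] = (unsigned char)((id >> 8)  & 0xFF);
    rgb[2] = (unsigned char)( id        & 0xFF);
}

unsigned rgb_to_id(const unsigned char rgb[3])
{
    return ((unsigned)rgb[0] << 16) | ((unsigned)rgb[1] << 8)
         | (unsigned)rgb[2];
}
```

For the readback to round-trip exactly, the ID pass must be rendered with lighting, dithering, blending, and fog disabled so the written color is bit-exact.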


Originally posted by remdul:

Is there a way I can convert the return value of either function so that they are in the same coordinate system, and thus can be compared?

Maybe this thread will help you out with converting depth values from glReadPixels():;f=11;t=000544

@Pascalito: I’ve used color selection before, but it had many limitations and in the end proved unreliable. For example, if the user has a display setting of 16-bit rather than 24-bit, the rasterized colors come out slightly different, and the indices are no longer valid. There are ways to work around that, but they only complicate the process.

@def: looks like a possible solution there, thanks! :slight_smile:

I agree with you, Remdul: it fails with 16-bit color depth. But is that essential, given that most graphics cards handle 24-bit color? Maybe you need to work on hardware such as mobile phones?

I’ve tried the gluUnProject function. It works well but doesn’t suit my requirements in terms of precision, because of the depth buffer: 16 bits, 24 bits? It isn’t fixed; it depends on your hardware. Moreover, I’ve noticed that reading the depth buffer can be extremely slow on some hardware, for instance on chips that use z-buffer compression (I’ve measured a factor of 15 compared to reading the color buffer on such hardware).

The triangle color index works very well for me, with no constraints except those mentioned before. I’m using it for precise lighting calculations, as a geometry query in a kind of ray tracer.

However, maybe you are using some extra features that make this technique unusable for you.