Depth test and selection (help!)

I am trying to write code for my 3D model editor that lets the user select points on a mesh by dragging a rectangle across the screen. This works fine if I just use feedback to get the screen coordinates of the points and then check them against the coordinates of the rectangle. The problem is that feedback is generated for objects regardless of whether they are occluded (and generally half the points on a mesh are occluded by the mesh itself), so points on both sides of the mesh get selected at the same time. I only want visible points to be selected. It was suggested that I use selection instead, so I implemented picking using selection, but it makes no difference: object names are still returned in the selection buffer regardless of whether they are visible or not.
So now I am trying to combine the depth information from selection with the screen-coordinate information from feedback to check whether a point is visible, but I can't get it to work.
Firstly, the selection demo in the Red Book says the equation for getting the depth value from the selection array is:

(float) *buffer / 0x7fffffff

Either this is wrong or I have implemented the selection wrong (pretty sure it's the former). When I perform this calculation I always get values around the 1.999xxxxx region. The required divisor seems to be 0xffffffff (i.e. 2^32 - 1). Is this correct?
My program flow for selection runs like so:

1. Do a feedback render using gluPickMatrix so that I don't get feedback for any redundant data. (Is it okay to use gluPickMatrix when not picking?)
2. Parse the feedback array, saving the pass-through values along with the screen coordinates.
3. Do a selection render using the same gluPickMatrix.
4. Parse the selection buffer and calculate the min z value (I only draw one point per name, so min and max z should be the same, right?). Read the name and get the matching x and y coordinates from the feedback info. Use glReadPixels(GL_DEPTH_COMPONENT, GL_FLOAT, ...) with those screen coordinates to find the actual depth value at that point on the screen. If the calculated z value from the selection array is the same as the actual z, then it's a visible point, so select it.
Of course it doesn't work, so does anyone have any idea what the problem might be?


I would suspect that it's highly unlikely that depth values returned from selection would match those returned from ReadPixels. (If for no other reason, the depth buffer is populated by rasterization that's typically hardware accelerated, while selection, AFAIK, is always implemented in software, so the precision and interpolation are likely different.) But you've already determined that empirically.

I’m guessing that you are not backface-culling. In theory, backface-culling during selection will perfectly select only the visible elements of the surface on convex hulls: no selection hits are recorded for culled polygons, and a hypothetical selection ray passing through a convex hull hits exactly two polygons, one facing the viewer and one facing away, the latter being culled. In practice, it's typically good enough for non-convex surfaces as well. The only potential problem arises when, from a certain perspective, the surface “doubles back” on itself, in which case you might select a front-facing polygon that is occluded by another part of the mesh.

Another option, although I’ve never used it in practice, is ARB_occlusion_query. If performance is an issue, this may be the way to go, since SelectBuffer picking is never hardware accelerated, but occlusion_query is. The basic idea is to first render all the large objects in a scene, then run occlusion queries on the bounding boxes of the smaller objects that you suspect might be occluded by the bigger ones. Each query returns the number of samples (or pixels) that would have been rasterized, i.e. visible. You can use this for picking by first rendering your mesh (as polygons), then rendering your vertices as tiny bounding boxes so that the query can return something useful; the boxes also provide some slop so that the user doesn’t have to click on the exact pixel containing a vertex. In fact, I’m not even sure a GL_POINT would register a single pixel in an occlusion query.
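A rough sketch of the query flow (entry points from the ARB_occlusion_query extension spec; draw_mesh and draw_vertex_bounding_box stand in for your own drawing code, and all of this needs a current GL context):

```c
/* Requires a current GL context with ARB_occlusion_query. */
GLuint query;
GLuint sample_count = 0;

glGenQueriesARB(1, &query);

draw_mesh();                        /* occluders first (your code) */

glDepthMask(GL_FALSE);              /* queries shouldn't disturb the buffers */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
draw_vertex_bounding_box();         /* tiny box around the point (your code) */
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &sample_count);
if (sample_count > 0) {
    /* some samples of the box passed the depth test: the point is visible */
}

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glDeleteQueriesARB(1, &query);
```

In a real editor you'd issue all the queries first and read the results afterwards, since glGetQueryObjectuivARB stalls until the query finishes.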

As with SelectBuffer, occlusion_query would require gluPickMatrix (or equivalent modification of the projection matrix) to constrain picks to the area around the cursor. And yes, gluPickMatrix can be used independently of SelectBuffer picking, for any purpose you want. For instance, try calling gluPickMatrix in your rendering pass to constrain rendering itself. Even better, try calling it conditionally, e.g. toggled by a keystroke, and you’ll have a really simple in-place zoom feature.