Driver problems when using glReadPixels(...GL_DEPTH_COMPONENT..) with gluUnProject?

I, like many people, use code along these lines to obtain world coordinates from screen coordinates (the crosshair position, in my case):

float cursordepth;
GLint viewport[4];
GLdouble mm[16], pm[16];

glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, mm);
glGetDoublev(GL_PROJECTION_MATRIX, pm);

// read back the depth under the crosshair (the center pixel of the screen)
glReadPixels(w/2, h/2, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &cursordepth);
double worldx = 0, worldy = 0, worldz = 0;
gluUnProject(w/2, h/2, cursordepth, mm, pm, viewport, &worldx, &worldz, &worldy);

This gives me correct coordinates on most cards, e.g. every nvidia card/driver and almost all radeon cards/drivers. But it gives wrong coordinates (too close / very close to the camera) on some cards, mostly ones with old drivers, but sometimes also on something recent.

So what is the problem here?

  • did I make a mistake?
  • are drivers simply that buggy? why is this code so “fragile” that so many drivers get it wrong? is there a workaround?
  • do maybe some drivers use different defaults for GL_DEPTH_SCALE/GL_DEPTH_BIAS? (see the sketch after this list)
  • is it possible that the driver reorders polygon drawing commands past glReadPixels? right now the order of a frame is:
    • draw world
    • the code above
    • draw HUD, including crosshair which overwrites the above pixel
      If somehow the crosshair is drawn before the glReadPixels that would explain the problem, but I don’t think that is legal for a driver to do.
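
If it really is the GL_DEPTH_SCALE/GL_DEPTH_BIAS thing, I guess I could query and force the defaults before the read, something like this (untested; variables as in the snippet above):

GLfloat depthscale, depthbias;
glGetFloatv(GL_DEPTH_SCALE, &depthscale);   // spec default is 1.0
glGetFloatv(GL_DEPTH_BIAS, &depthbias);     // spec default is 0.0

glPixelTransferf(GL_DEPTH_SCALE, 1.0f);     // force the defaults so the
glPixelTransferf(GL_DEPTH_BIAS, 0.0f);      // readback comes back unscaled

glReadPixels(w/2, h/2, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &cursordepth);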

Any hints?

In what way is it wrong? Which direction is it displaced in?

The most likely issue is that the value from the depth readback does not correlate correctly with what you expect from OpenGL, or that the depth buffer precision is just not what you expect it to be. That would make sense with older cards, and as a result you’re not picking the surfaces where you expect.

Try different depth buffer sizes and retest on various cards to see if this is correct.

If this is the problem, then after getting an initial hit you could adjust the clip planes to sandwich the approximate depth, re-render, and read back with another unproject; that would let you dial up the accuracy. glScissor etc. would be your friend.
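
Something along these lines is what I have in mind (just a sketch, not tested; drawworld(), fovy, aspect and approxdist stand in for whatever your engine uses, approxdist being the eye-space distance from the first unproject):

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluPerspective(fovy, aspect, approxdist * 0.9, approxdist * 1.1);  // sandwich the first hit
glMatrixMode(GL_MODELVIEW);

glEnable(GL_SCISSOR_TEST);
glScissor(w/2, h/2, 1, 1);                 // only the crosshair pixel matters
glClear(GL_DEPTH_BUFFER_BIT);
drawworld();                               // placeholder: redraw the scene
glDisable(GL_SCISSOR_TEST);

glGetDoublev(GL_PROJECTION_MATRIX, pm);    // pick up the tightened projection
glReadPixels(w/2, h/2, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &cursordepth);
gluUnProject(w/2, h/2, cursordepth, mm, pm, viewport, &worldx, &worldz, &worldy);

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);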

&worldx, &worldz, &worldy
?

The read with GL_FLOAT is also something which adds another chance for drivers to be buggy.
Try GL_UNSIGNED_INT, which should put the actual depth buffer bits at the most significant bits, and convert yourself.

dorbie: it appears to be right in front of the camera, even when the “wall” the crosshair is pointing at is several “meters” away. I don’t know for sure as I don’t have access to the cards/drivers where the problem exists, but in any case it is way more than what can be explained by a float precision problem (if that were the case it would at least be roughly near the wall).

Having access to a machine that shows the problem would help, I will try to see if some of the users can help me.

heppe: my engine (Cube) uses Z for up/down and (X,Y) for the ground plane… I solved this by simply swapping all Y and Z passed to OpenGL calls (retarded, maybe, but it works fine).

Relic: I could try that, but doesn’t it mean that I have to be aware of different Z buffer formats, at least 16/24/32 and whatever cards may come up with? Right now the code is completely depth size agnostic, and I would like to keep it that way… would GL_DOUBLE help?

Thanks for all the help so far!

>>I could try that, but doesn’t it mean that I have to be aware of different Z buffer formats, at least 16/24/32 and whatever cards may come up with? Right now the code is completely depth size agnostic, and I would like to keep it that way… would GL_DOUBLE help?<<

It’s as easy as glGetIntegerv(GL_DEPTH_BITS, &depthBits); to make it aware.

No, double isn’t any better here.

I just wanted to point out that your OpenGL implementations which don’t show the correct result may have a bug in their int-to-float conversion, like, uhm, having completely forgotten to do it, for example.

It’s your chance to test that with the GL_UNSIGNED_INT readback and a do-it-yourself scale to the 0.0 to 1.0 range. Using doubles for that calculation might help to increase precision.
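
Something like this, roughly (untested, reusing the variables from your first snippet):

GLint depthbits;
glGetIntegerv(GL_DEPTH_BITS, &depthbits);   // e.g. 16, 24 or 32

GLuint zint;
glReadPixels(w/2, h/2, 1, 1, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, &zint);

// the value comes back scaled to the full 32-bit range, with the real depth
// buffer bits in the most significant bits; scale it back to 0.0..1.0 in double
double depthval = (double)zint / 4294967295.0;   // 2^32 - 1

gluUnProject(w/2, h/2, depthval, mm, pm, viewport, &worldx, &worldz, &worldy);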

So, I wrote a little glut based test program that simply displayed a single rectangle at the center of the window. Using two keys I could move the “object” forwards and backwards by modifying the modelview matrix. When I calculated the “hit” position as in your code, I always got a value very close to 0, basically within +/- 1 bit of depth buffer resolution.

However, when I changed the code to use an identity matrix for mm, I got the correct values. Depending on what exactly you do with your modelview matrix, you will want to do some variation of that. Giving the correct modelview matrix to gluUnProject will give you back values in model space, apparently. I suspect you want values in world space.
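
Roughly what I mean, in terms of your snippet (untested):

GLdouble identity[16] = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 };

// with an identity modelview, gluUnProject hands back eye-space coordinates
double ex, ey, ez;
gluUnProject(w/2, h/2, cursordepth, identity, pm, viewport, &ex, &ey, &ez);

// with the full modelview, it hands back coordinates in whatever space that
// matrix maps from
double wx, wy, wz;
gluUnProject(w/2, h/2, cursordepth, mm, pm, viewport, &wx, &wy, &wz);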

relic: thanks, I will give that a go once I have some hw to test on. But I sincerely doubt that an int/float conversion is what is going wrong in the drivers. Most cards nowadays actually use float depth buffers, so no conversion would happen anyway.

idr: that is interesting… when you say “close to 0” do you mean the raw depth value, or the coordinates you get out of gluUnProject()?

If what you say about the modelview matrix is correct, it would appear that these drivers have buggy gluUnProject() implementations, which would be weird as this is entirely a software implementation, meaning that probably nothing has changed about it since they licensed it from SGI (or copied it from Mesa, whichever).

My modelview matrix is something along the lines of:

glLoadIdentity();
glRotated(player1->roll,0.0,0.0,1.0);
glRotated(player1->pitch,-1.0,0.0,0.0);
glRotated(player1->yaw,0.0,1.0,0.0);
glTranslated(-player1->o.x, -player1->o.z, -player1->o.y);

What card/drivers/OS do you use? If you feel like helping out, can you download Cube (cubeengine.com), and tell me if rockets explode in your face when you fire them?

>>Most cards nowadays actually use float depth buffers, so no conversion would happen anyway.<<

That’s simply a false assumption.