# gluUnProject question

Hi to all!

Suppose that you have a point (x,y,z) in object coordinates.

OpenGL transforms it in the usual way, using the modelview matrix and the projection matrix, and finally maps it to a screen pixel.

Suppose that the coordinates of this pixel are (xs, ys).

Now I want to get back the original object coordinates of that pixel (that is, I want (x,y,z)).

Is there a way to do this (using gluUnProject or another method)?

Ps.
I ask because I will not always have (x,y,z); I need to recover the original object coordinates using only (xs,ys)…

Pps.
To say a bit more: I have a textured quad, rendered on screen after being rotated and translated. Suppose I select a pixel of coordinates (xs,ys) from the frame buffer. This pixel could be the location of a corner of the texture, found with a corner detection algorithm. Now I want the coordinates of this corner in object space; that way, I can compute the position of the (frame buffer) pixel relative to the original texture…

Use gluUnProject for that. You will need screen X, screen Y and screen depth. Without the depth you can't get the correct 3D coordinates.

If the object is additionally transformed by a vertex shader (for example, skinning or instancing), you must write your own inverse transformation function.
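For reference, here is a minimal sketch in plain C of the first step gluUnProject performs: mapping window coordinates back to normalized device coordinates. The full routine then multiplies the NDC point by inverse(projection × modelview) and divides by w; with identity matrices the NDC point already equals the object-space result. The struct and function names here are mine, not part of GLU.

```c
/* Window coordinates -> normalized device coordinates, as gluUnProject
 * does internally before applying the inverse matrix. Assumes the
 * default depth range [0, 1]. */
typedef struct { double x, y, z; } Vec3;

Vec3 window_to_ndc(double winX, double winY, double winZ, const int viewport[4])
{
    Vec3 ndc;
    ndc.x = 2.0 * (winX - viewport[0]) / viewport[2] - 1.0;
    ndc.y = 2.0 * (winY - viewport[1]) / viewport[3] - 1.0;
    ndc.z = 2.0 * winZ - 1.0;   /* depth [0,1] -> [-1,1] */
    return ndc;
}
```

Note that winY here is in OpenGL's convention (origin at the bottom-left), which is why mouse coordinates from most windowing APIs need the y flip discussed below.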

Hi,

how do I get screenZ? I was trying the following:

```c
GLdouble mvMat[16], prjMat[16];
GLint viewport[4];
GLdouble x1, y1, z1;
GLfloat depthBuffer[W * H];   /* the frame buffer and depth buffer are WxH in size */

glGetDoublev(GL_MODELVIEW_MATRIX, mvMat);
glGetDoublev(GL_PROJECTION_MATRIX, prjMat);
glGetIntegerv(GL_VIEWPORT, viewport);

glReadPixels(0, 0, W, H, GL_DEPTH_COMPONENT, GL_FLOAT, depthBuffer);

gluUnProject(
    screenX,
    viewport[3] - 1 - screenY,
    (double)depthBuffer[screenY * W + screenX],
    mvMat, prjMat, viewport,
    &x1, &y1, &z1);
```

Unfortunately, it does not work.

I will explain in detail what I am doing.
Suppose that in the original texture I select the pixel at coordinates (xt, yt). I transform these coordinates as follows:

```c
x1 = (((float)xt - (w - 1.0)/2.0)/(w - 1.0)) * 2 * s;
y1 = -(((float)yt - (h - 1.0)/2.0)/(h - 1.0)) * 2;
```

where w is the width of the texture, h is its height, and s is the aspect ratio. The pixel (xt,yt) is thus transformed into the point (x1,y1,0), which I take as its object coordinates (I set the z-coordinate to 0). OpenGL transforms the point (x1,y1,0) with the modelview matrix and the projection matrix and finally maps it to the screen at (screenX, screenY).
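Written as a standalone function (names are mine, and I am assuming s = w/h, since the post only says s is the aspect ratio), the mapping is:

```c
/* Texture pixel (xt, yt) -> object-space point (x1, y1, 0).
 * w, h: texture size in pixels; s: aspect ratio (assumed w/h). */
typedef struct { double x, y; } Obj2D;

Obj2D texel_to_object(int xt, int yt, int w, int h, double s)
{
    Obj2D p;
    /* shift to the texture center, normalize to [-0.5, 0.5], then scale */
    p.x =  ((double)xt - (w - 1.0) / 2.0) / (w - 1.0) * 2.0 * s;
    p.y = -((double)yt - (h - 1.0) / 2.0) / (h - 1.0) * 2.0;
    return p;
}
```

With this convention the quad spans [-s, s] × [-1, 1] in object space: xt = 0 maps to x = -s, xt = w-1 to x = +s, and the y axis is flipped so that the top texture row (yt = 0) maps to y = +1.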

When I use gluUnProject() with the coordinates (screenX, screenY, screenZ), where screenZ is the depth value read from the depth buffer, I don't get (x1,y1,0) but other values…

Am I doing something wrong??

When you use glReadPixels, read back just the single pixel you want. Print it out to make sure it's being read correctly. The numbers should be fairly intuitive (1.0 if you miss the polygon, less than 1.0 if you hit it). If you angle the face away from you, you should see the number increase as you click farther down the polygon. Once that works, gluUnProject should work too. Why, by the way, are you flipping your y over? Which API are you using that gives a mouse position not matching OpenGL's?

Hi,

it seems to work now. First, I use

```c
glReadPixels(winX, viewport[3] - 1 - winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
```

I tried GL_DOUBLE with a double winZ, but it didn't work (why???).

Then I simply use (winX, winY, winZ) with gluUnProject() and transform back to texture coordinates.
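For completeness, the "transform back" step is just the algebraic inverse of the formulas from my earlier post; something like this (function and struct names are mine, with w, h, s as before):

```c
/* Object-space (x1, y1), as returned by gluUnProject, -> texture
 * pixel (xt, yt). Inverse of the texel-to-object mapping:
 *   x1 = (xt - (w-1)/2) / (w-1) * 2 * s
 *   y1 = -(yt - (h-1)/2) / (h-1) * 2                                 */
typedef struct { double xt, yt; } Texel;

Texel object_to_texel(double x1, double y1, int w, int h, double s)
{
    Texel t;
    t.xt =  x1 / (2.0 * s) * (w - 1.0) + (w - 1.0) / 2.0;
    t.yt = -y1 / 2.0       * (h - 1.0) + (h - 1.0) / 2.0;
    return t;
}
```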

Thanks to all of you for your help.

Luca

Ps.
Actually, instead of calling glReadPixels() once per selected pixel (there are between 200 and 1000 of them), I use a single glReadPixels() call to read the entire depth buffer. It seems a lot faster…

Were you doing multiple calls to glReadPixels for those 200–1000 pixels, or just one? For a single glReadPixels call, it's hard to believe that reading the entire buffer would be faster than reading just a portion.