Recovering pixel coordinates from world coordinates

Hello Users,
I don't know if this is the proper forum, but this is the problem I have:

I have a set of 3D points in world coordinates.
I have defined this perspective view:
gluPerspective(60, 1.309, 0.00001, 1000000.0);
with a viewport of 872 x 666 pixels.

My dataset is visualized properly. I want to recover their positions in PIXELS, so I used glGetFloatv(GL_PROJECTION_MATRIX, …) and glGetFloatv(GL_MODELVIEW_MATRIX, …) to recover the projection and modelview matrices and simulate the OpenGL pipeline (following the specification). But I don't get the pixel points back: most of them are negative or not within the screen size.
So, given a point PPw (a 3D point in world coordinates), I multiply it as follows to get the clip-space version:

PPc = projection * modelview * PPw;

Is that enough?!

This is the dataset:
{ 0, 0, 0,
-10, -10, -10,
0, -10, -10,
10, -10, -10,
-10, -10, 0,
0, -10, 0,
10, -10, 0,
-10, -10, 10,
0, -10, 10,
10, -10, 10,
-10, 0, 10,
0, 0, 10,
10, 0, 10,
-10, 10, 10,
0, 10, 10,
10, 10, 10,
10, 0, -10,
10, 0, 0,
10,10, -10,
10, 10, 0};

Thank you so much!!

To get screen coordinates, you have to apply the viewport transformation. See the transformation chapter of the spec.

After applying the projection transformation and the perspective division, coordinates are in the [-1, 1] interval.
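That last step from [-1, 1] coordinates to pixels can be sketched like this (a minimal sketch; the function name is illustrative, and it assumes the 872 x 666 viewport from the question with its origin at (0, 0)):

```cpp
// Viewport transform: maps normalized device coordinates (NDC, each in
// [-1, 1] after the perspective divide) to window (pixel) coordinates.
// vx, vy are the viewport origin (usually 0, 0); vw, vh its size in pixels.
void ndcToWindow(double ndcX, double ndcY,
                 int vx, int vy, int vw, int vh,
                 double& winX, double& winY) {
    winX = vx + (ndcX + 1.0) * 0.5 * vw;
    winY = vy + (ndcY + 1.0) * 0.5 * vh;  // OpenGL origin: bottom-left
}
```

For a 872 x 666 viewport, NDC (0, 0) — the center of the view — lands at pixel (436, 333).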

Sorry to bother you, but my modelview matrix is the identity, so no transformation is applied to the object.
I recover the projection matrix P and then apply it to the set of points, which are of the form (x, y, z, 1). I get this weird set (I just show the first three points):

0 0 -2e-005 0
-13.2285798 -17.3203015 9.99998 10
0 -17.3203015 9.99998 10

where I suppose the 4th column is the normalization factor w. But there are clearly divisions by zero, which cause NaN errors, as well as 0 divided by something, which gives 0…
This is the P I get from OpenGL
1.32285798 0 0 0
0 1.73203015 0 0
0 0 -1 -2e-005
0 0 -1 0

Weird results.
Thanks for the help!

The problem is not that the modelview matrix is the identity. What I meant is that you need to apply the viewport transformation after the perspective division to get coordinates in pixels.

Points with a z coordinate equal to zero are located at the eye position, so they are not visible and you should discard them before the perspective division.

BTW, you may look at the gluProject documentation, which performs exactly what you are trying to do.

Something may also be wrong in your matrix multiplications. Note that OpenGL stores matrices using the column-major order convention.

Further, no transformation is applied to the eye. That is, the eye is at the origin looking down the -Z axis, not just in eye space, but in world space and object space too, since they're the same, given your identity MODELVIEW! (Remember: MODELVIEW = MODELING and VIEWING transforms combined.)

I recover the projection matrix P and then apply it to the set of points, which are of the form (x, y, z, 1). I get this weird set (I just show the first three points):

0 0 -2e-005 0
-13.2285798 -17.3203015 9.99998 10
0 -17.3203015 9.99998 10

where I suppose the 4th column is the normalization factor w. But there are clearly divisions by zero, which cause NaN errors, as well as 0 divided by something, which gives 0…

Well I could easily believe the first point. Think about it. You’re feeding it a vertex position of 0,0,0,1 (the origin in object space; ring a bell?). It’s being transformed by the identity MODELVIEW matrix. So it’s still 0,0,0,1 (eye-space origin). Your eye is at 0,0,0,1 (the origin). So this point is located at the eye, behind the near clip plane. It’s gonna be clipped.

This is why clipping in the GPU takes place in homogeneous (4-vector) space before the perspective divide. No nasty divide-by-zero singularities to deal with.

This is the P I get from OpenGL
1.32285798 0 0 0
0 1.73203015 0 0
0 0 -1 -2e-005
0 0 -1 0

You can verify that this is the right matrix: see the gluPerspective man page, which contains the formula for the projection matrix it generates. Plug in your numbers.

FWIW, this is a specific case of the more general glFrustum perspective matrix (see the glFrustum man page). You can also find this in Appendix F of the Red Book.
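Plugging the numbers in is easy to automate. A minimal sketch that builds the matrix from the formula on the gluPerspective man page (function name illustrative; written row-major here for readability, whereas OpenGL hands it back column-major):

```cpp
#include <cmath>

// Build the gluPerspective projection matrix from the man-page formula.
// fovy is in degrees, as gluPerspective takes it.
void perspectiveMatrix(double fovy, double aspect, double zNear, double zFar,
                       double m[4][4]) {
    const double kPi = 3.14159265358979323846;
    const double f = 1.0 / std::tan(fovy * kPi / 360.0);  // cot(fovy / 2)
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            m[r][c] = 0.0;
    m[0][0] = f / aspect;
    m[1][1] = f;
    m[2][2] = (zFar + zNear) / (zNear - zFar);
    m[2][3] = 2.0 * zFar * zNear / (zNear - zFar);
    m[3][2] = -1.0;
}
```

With (60, 1.309, 0.00001, 1000000.0) this gives f = cot(30°) ≈ 1.732, f/aspect ≈ 1.323, m[2][2] ≈ -1 and m[2][3] ≈ -2e-5, matching the matrix above.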

You can then do clipping on your homogeneous points. Basically, throw out all of them for which this is not true: -|w| <= x, y, z <= |w|. The rest are in-frustum, and you do the perspective divide on them.
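As a sketch, the test and the divide look like this (function names are illustrative; note the w == 0 guard, which is exactly what keeps the divide-by-zero out of the pipeline):

```cpp
#include <cmath>

// Clip test in homogeneous coordinates, BEFORE the perspective divide:
// a point (x, y, z, w) is in-frustum iff -|w| <= x, y, z <= |w|.
// The w != 0 guard rejects points on the eye plane, so the divide below
// is never applied to w == 0.
bool insideFrustum(double x, double y, double z, double w) {
    const double a = std::fabs(w);
    return w != 0.0
        && -a <= x && x <= a
        && -a <= y && y <= a
        && -a <= z && z <= a;
}

// Perspective divide: (x, y, z, w) -> (x/w, y/w, z/w), each component
// ending up in [-1, 1] for in-frustum points.
void perspectiveDivide(double& x, double& y, double& z, double w) {
    x /= w; y /= w; z /= w;
}
```

Running the thread's own numbers through it: (0, 0, -2e-5, 0) fails the test (w = 0, the point at the eye), and (-13.23, -17.32, 9.99998, 10) fails too (y < -|w|), which is why those points never reach the screen.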

Also note that your near clip plane value of 0.00001 is going to result in very little Z precision available except for eye-space Z values really close to this value. In general, you want to push this near clip plane out as far as you can stand so you have plenty of Z precision for the whole scene.

I do thank you for the help, but sorry, this is my fault: I forgot to tell you about the gluLookAt I define. Basically the camera is not at the origin but at this position:

gluLookAt(-90.0455, 30.05, -29.98, -0.02, 0.03, 0.03, 0.4, 0.8, 0.4);
So the eye is off in the negative z direction, looking towards the origin where the set of points is.

Sorry for my ignorance again, but when you guys talk about the “perspective division”, do you mean:

(x, y, z, w) → (x/w, y/w, z/w, 1) ?

Thanks.
Giancarlo

Sorry for my ignorance again, but when you guys talk about the “perspective division”, do you mean:

(x, y, z, w) → (x/w, y/w, z/w, 1) ?

Yes.

I do thank you for the help, but sorry, this is my fault: I forgot to tell you about the gluLookAt I define. Basically the camera is not at the origin but at this position:

gluLookAt(-90.0455, 30.05, -29.98, -0.02, 0.03, 0.03, 0.4, 0.8, 0.4);
So the eye is off in the negative z direction, looking towards the origin where the set of points is.

In this case, the modelview matrix is no longer equal to the identity and you must take this transformation into account.

Ohhh geez, you're right! It's because I put the glGetFloatv one line before the gluLookAt statement, so I was always getting the identity instead of the real modelview!

OK… hmm, although there is a slight difference, it didn't change much.
I have these points, the ones I pass to glVertex3f:
0 0 0 1
-10 -10 -10 1
0 -10 -10 1
10 -10 -10 1
-10 -10 0 1
0 -10 0 1

I know that the corresponding pixels, if I take a screen capture of the rendering window, are:
436 333
436 129
357 197
294 251
514 197
436 251

The parameters I pass as input to gluPerspective match the gluPerspective formulas.
So I want to obtain the same pixels picked by the mouse by applying the OpenGL pipeline:
Pixel = V * P * M * pp_w (P projection, M modelview, V viewport);
I call glGetFloatv to retrieve the matrices. Does it make any difference if I call glGetFloatv(GL_PROJECTION_MATRIX, …) outside of glMatrixMode(GL_PROJECTION)?
I get some nasty divisions by zero, and moreover I don't get the same pixels. (I know that in OpenGL the origin of the pixels is at the bottom left, while in image-processing software it is at the upper left, right?)

Thanks
Giancarlo

Does it make any difference if I call glGetFloatv(GL_PROJECTION_MATRIX, …) outside of glMatrixMode(GL_PROJECTION)?

No, it does not make any difference.

The problem seems to be in your matrix multiplications. In your matrix multiplication function, do you use the row- or column-major order convention? You may have to transpose the matrices you get to perform the correct transformation.
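For reference, the whole chain discussed in this thread can be sketched in one gluProject-style function that works directly on the column-major float[16] arrays glGetFloatv returns, so no transposition is needed (a sketch under those assumptions, not the GLU implementation; names are illustrative):

```cpp
#include <cmath>

// World -> pixel: clip = P * (M * v), perspective divide, viewport
// transform. P and M are flat arrays as returned by glGetFloatv, which
// are COLUMN-major: element (row r, column c) lives at m[c * 4 + r].
// Returns false for points with w ~ 0 (on the eye plane).
bool worldToPixel(const double P[16], const double M[16],
                  const double v[4] /* x, y, z, 1 */,
                  int vpX, int vpY, int vpW, int vpH,
                  double& px, double& py) {
    double eye[4], clip[4];
    for (int r = 0; r < 4; ++r) {            // eye = M * v
        eye[r] = 0.0;
        for (int c = 0; c < 4; ++c) eye[r] += M[c * 4 + r] * v[c];
    }
    for (int r = 0; r < 4; ++r) {            // clip = P * eye
        clip[r] = 0.0;
        for (int c = 0; c < 4; ++c) clip[r] += P[c * 4 + r] * eye[c];
    }
    if (std::fabs(clip[3]) < 1e-12) return false;
    const double ndcX = clip[0] / clip[3];   // perspective divide
    const double ndcY = clip[1] / clip[3];
    px = vpX + (ndcX + 1.0) * 0.5 * vpW;     // viewport transform,
    py = vpY + (ndcY + 1.0) * 0.5 * vpH;     // bottom-left origin
    return true;
}
```

To compare against pixels picked in image-processing software (top-left origin), flip with vpH - py.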

Well, this is the code I use to transform the float[16] vector into a 4x4 matrix:

int j = 0;
for (int i = 0; i < 4; i++) {
    j = 0;
    for (int r = i; r < 16; r = r + 4) {
        // element (row i, column j) of the row-major matrix
        modelView[i][j] = model[r];
        j++;
    }
}

and then I do a row-major multiplication. But I'm double-checking the other way as well.
Cheers.
Giancarlo

I was wondering whether the thing I'm trying to do isn't already available via the gluProject and gluUnProject functions.

Guys, this last message is to thank you all for the help! I sorted out my problem and now everything works fine!

Many thanks for all your suggestions!
Giancarlo