# coordinates

Hello,

In the program I am working on, the vertex coordinates of the geometry data are in a quite normal coordinate system where x, y and z are floating-point values. For example, the terrain goes from x=0 to x=2000, from z=0 to z=2000, and y (the height) from y=0 to y=200.

But for rendering, the z coordinates are multiplied by -1, due to the assumption that the z axis points in the negative direction. This leads to some confusion, as every z coordinate has to be multiplied by -1, and for different parts of the scene this happens at different stages, which makes the program quite confusing. I did not design this, but I am working on the program’s further development.

My (probably stupid) question is: is this really necessary? Or were the people who designed this mistaken? Or, as I can imagine, can you also use the z coordinates the way they are, but then everything would look mirror-inverted? Isn’t it possible to configure the viewport and perspective so that it looks OK without having to invert any coordinates?

Jan

Why not just issue a “camera” transformation to look down the positive Z-axis before drawing? You don’t have to accept the default camera orientation.

The program uses gluLookAt, but I fear that if I change the z coordinate inversion, everything gets mirror-inverted. I may be wrong, but I would simply like to know before I start working on this. My thought is: if you look down the z axis in the negative direction, the x values get smaller to the left and larger to the right. If I look in the opposite direction AND invert the z values (compared to before, so this time, in fact, NOT inverting them), everything might look mirror-inverted. I find this quite hard to imagine, but it seems logical to me.

In fact, the view should be configured so that the z axis points in the positive direction (the larger the value, the farther away), and the x axis points to the right. But if you look down the z axis in the positive direction, the x axis, I think, will point to the left. At least, I fear so.

Yeah, I have the same problem. The gluLookAt function works like a mirror, I don’t know why, so now I’m thinking of constructing a camera class without it (only glRotatef and glTranslatef).

I don’t know what the original authors intended, but I would get rid of the multiplications by -1 and leave the vertices alone. Then I would try each of the following and see what the terrain looks like:

a) Back up the camera 2000 units. Now you’re looking at the scene down the negative z-axis from z = 2000 and as you move forward, the z values will be decreasing.

b) Rotate the camera around the y-axis 180 degrees like I suggested in my first reply. Now you’re looking at the scene down the positive z-axis from the origin and as you move forward, the z values will be increasing.

Thank you for your suggestions… I think b would be appropriate. I will try it and then post if it works, although this will take some time.

Jan

Originally posted by JanHH:

My (probably stupid) question is: is this really necessary? Or were the people who designed this mistaken? Or, as I can imagine, can you also use the z coordinates the way they are, but then everything would look mirror-inverted? Isn’t it possible to configure the viewport and perspective so that it looks OK without having to invert any coordinates?
Jan

The problem you are facing is that your application and OpenGL use different coordinate systems. While OpenGL uses a right-hand system (the thumb of your right hand points down the positive x-axis, the index finger along the positive y-axis and the middle finger along the positive z-axis; I hope you get it), your application uses a left-hand system. Transforming your camera won’t help you in this situation, as the camera is defined in OpenGL’s coordinate system. There is not much you can do about this. For rendering, the left-hand coordinates have to be transformed to right-hand coordinates, and that is what multiplying all z values by -1 does. I don’t think there is another solution.

hih

[This message has been edited by satan (edited 07-29-2003).]

But isn’t it weird that OpenGL does this, as the “normal” coordinate system one learns in school math is a left-hand one?

But at least, now I know that I was not wrong. Thanks,
Jan

Originally posted by JanHH:
[b]But isn’t it weird that OpenGL does this, as the “normal” coordinate system one learns in school math is a left-hand one?

But at least, now I know that I was not wrong. Thanks,
Jan[/b]

The ‘normal’ coordinate system you learn in German schools and universities is a right-hand one. So I think it is not weird but just a convention. Btw, remember that your coordinate system also determines the orientation of your cross product.

Instead of multiplying all your z coordinates by -1, you can simply fold that multiplication into the modelview matrix. This means the multiply by -1 is factored into the matrix instead of being a separate operation on every vertex, so all your coordinates are converted automatically. You would simply multiply the modelview matrix with:
1 0 0 0
0 1 0 0
0 0 -1 0
0 0 0 1

Actually, since the order of multiplication is important, this must be done first, before any rotations. This then becomes the “identity matrix” you would be using. Simply load it instead of the identity matrix with the glLoadMatrix command.

You can use this to convert any type of coordinates. In quake3, for example, +z points up and +y points away from the viewer. The “identity matrix” would then be:
1 0 0 0
0 0 1 0
0 -1 0 0
0 0 0 1

As Confusion said, a glScalef(1.0, 1.0, -1.0) before any modelview operation (right after glLoadIdentity()) should work.

Of course Confusion and yoyo are right; I think I just should not post when I am too tired to think.

Thanks yoyo for pointing out the obvious – and doing it in a nice way too. I too was posting while half asleep. Long ago, I wrote my own matrix class and thus forgot about glScale – which would work in the case of a simple z-axis inversion.

The point I was making though is that any common operation you do to preprocess a vertex can usually be combined into a matrix. This matrix would be used whenever you perform a glLoadIdentity on the modelview matrix. This includes scaling, translation, and even axis swapping. This is obviously a time saver since the preprocessing operations only have to be done once on the matrix rather than on every vertex passed in. It also has the added benefit of letting you work on the vertex data in the format they were created in.

Don’t be afraid – I’m using glScalef for all my reflection stuff where possible. The only thing to watch out for is face culling (CCW vs. CW), as the winding is inverted for negative scale values.

I’m not very good at the matrix stuff, so let me ask again:

The program uses gluLookAt in projection matrix mode (1. set matrix mode, 2. load identity, 3. gluLookAt). That isn’t the modelview matrix, right?

After that, the matrix mode is changed to modelview, the identity is loaded, and then everything is drawn, with a glTranslate and/or glRotate for some objects. And the solution would be to call the glScale command after glLoadIdentity in modelview mode?

Thanks
Jan

Why are you using gluLookAt() in projection mode? This can cause problems with lighting, fog, and textures. Check out OpenGL FAQ 8.030:

http://opengl.org/developers/faqs/technical/viewing.htm#view0030

No, modelview and projection are two different things.

No, your gluLookAt should not be called on the projection matrix, only on the modelview matrix.

Remember, the camera in OpenGL does not move; we make it appear to move by moving the world around the camera. You can get the same effect as gluLookAt with a few glRotate and glTranslate commands.

Think of the projection matrix as your camera lens settings.

The modelview matrix is where objects are in the world; your camera location is treated like an object in the world. gluLookAt is made up of rotations and translations that move the camera to some point in the world.

// Render example
// Projection matrix
glMatrixMode(GL_PROJECTION);  // Tells OpenGL the next matrix operations will be on the projection matrix.
glLoadIdentity();             // Clear the matrix so we know the current matrix state; otherwise we could just be adding to a past matrix and end up with unknown matrix settings.

glOrtho(-30.0, 30.0, -30.0, 30.0, -30.0, 30.0);  // Set the volume that will be rendered.

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();             // Again we clear the matrix, this time the modelview matrix.
// gluLookAt(...) would be called here, since it causes all models to be drawn relative to its settings.

Now we are ready to start drawing stuff.

Originally posted by JanHH:
[b]I’m not very good at the matrix stuff, so let me ask again:

The program uses gluLookAt in projection matrix mode (1. set matrix mode, 2. load identity, 3. gluLookAt). That isn’t the modelview matrix, right?

After that, the matrix mode is changed to modelview, the identity is loaded, and then everything is drawn, with a glTranslate and/or glRotate for some objects. And the solution would be to call the glScale command after glLoadIdentity in modelview mode?

Thanks
Jan[/b]

First, the guys above are right about gluLookAt belonging to the modelview transformations.
Second, I personally would call glScale just before drawing and after all rotations. It works, as I tried it in my engine. It should also work if you call it directly after loading the identity matrix, but then it will affect your rotations. And remember that you have to change the winding of your polygons (glFrontFace).

[This message has been edited by satan (edited 07-30-2003).]

Sorry, I was wrong… of course it’s

glMatrixMode(GL_PROJECTION)
gluPerspective(…)
glMatrixMode(GL_MODELVIEW)
gluLookAt(…)

and after that, glScale(…)? So for example, when I am standing at 100/200/300 in “my” coordinate system, which would be 100/200/-300 in OpenGL’s coordinate system, I need to call gluLookAt with the -300, not 300, right? And after that, NOT multiply the z coordinates by -1 BUT use glScale(…) after gluLookAt. Am I now getting it?

Sorry for these questions…
Jan