why use a camera

I was wondering if you could just clear something up for me. There are lots of posts about using a camera (or at least simulating one with gluLookAt). Why is it better to use some kind of camera routine instead of just using glRotate in the opposite direction to the way you want to turn? Is it faster to recalculate the projection than to recalculate the positions of the objects?


No, just easier for us dumb humans to understand.

There’s no camera in OpenGL, so you must simulate one. And this is done by changing the modelview matrix with glTranslate and glRotate.

So to comment on this: Why is it better to use some kind of camera routine instead of just using glRotate in the opposite direction to the way you want to turn?

Using glRotate and glTranslate is THE way to do it. This is how gluLookAt works. As far as I know, there’s no other way to go (there might be similar ways, but I’m pretty sure you will end up the same way anyway).

And to say another thing about projection and object positions: if you refer to projection as GL_PROJECTION, I have to say: initialize it when you start, then DON’T TOUCH IT. All “camera” movements are supposed to be in GL_MODELVIEW.

[This message has been edited by Bob (edited 07-10-2000).]

yeah… but shouldn’t gluLookAt() be faster? I haven’t looked at the GLU source, but presumably it derives one matrix and multiplies that matrix by the topmost MODELVIEW matrix to get the desired world, instead of multiplying 4-5 matrices again and again…
Keep in mind that matrix multiplication is very intensive!! So you want to keep the number of matrices you multiply to a minimum.

Matrix multiplications are not that demanding. Remember that OpenGL performs at least two matrix multiplications per vertex passed (modelview and projection). And camera movement is only performed once per frame (unless you have more than one “camera” that needs to be transformed). So if you have 20k vertices running at 20 fps, you will have 800k matrix multiplications per second just for the vertices, and only 20 multiplications for the viewpoint. And if you pass vertex normals and multitexture coordinates, you will easily double or even triple this number.

And if your scene has more vertices, and is running faster… well… even more multiplications per second.

So don’t worry about how fast/slow gluLookAt is; there are a lot more things that need a lot more attention to make your application run faster.

So what you're all saying is that there is no point in creating a camera class - or anything like that - other than to make the program seem more logical.

I guess it would still be better to have a camera class to make life simple.

But would there be no difference between using gluLookAt or the Rotate/Translate functions to implement it?


The reason why you create a camera class is, as you say, to make the code easier to read and understand.
If your class has two points in world space, one for the camera position and one for the camera target, you may want to use gluLookAt, since you pass position and target points. If your camera class has one point for position, and three angles to define the direction you are looking in, you may want to use glTranslate and glRotate.

Which way you choose is all up to what information you store in your class and how you want it to work. But in the end, you will always end up altering the world origin.

I find that working out the up vector for gluLookAt is painfully unintuitive, so I use the following code to direct the camera:

float a, b, c, M[16];

a = cam->vn[0];   /* normalized view normal */
b = cam->vn[1];
c = cam->vn[2];

memset( M, 0, 16*sizeof(float) );
M[0]  = -c;
M[8]  = a;
M[1]  = -a*b;
M[5]  = a*a + c*c;
M[9]  = -b*c;
M[2]  = -a;
M[6]  = -b;
M[10] = -c;
M[15] = 1.0f;

glRotatef( cam->angle, 0.0f, 0.0f, 1.0f );  /* roll about the view normal */
glMultMatrixf( M );                         /* apply the orientation matrix */

where cam->angle is the angle the camera is rotated about its view normal (cam->vn). By the way, gluLookAt also does the whole thing with glMultMatrixf, but its setup of M is more complex anyway, so speed is not really an issue.

I agree that gluLookAt’s up vector can be a pain. It should form a 90 degree angle with the view direction, but this is not a strict rule. As long as it’s pointing in the “more or less correct” direction, it will work anyway, since gluLookAt orthogonalizes it internally. So if you have, for example, a not-so-bumpy landscape, where your camera is hovering above the ground and doesn’t look up/down too much, you can always set the up vector to {0,1,0}. This is quite good. But if you want complete freedom with your camera, you must either use your own routine, or calculate a proper up vector.