I’m trying to implement a camera model. It shouldn’t be specific to a first- or third-person perspective, because I’m planning on using both, so I’m trying to keep it as generic as possible.
So this is what I’m thinking:
The camera is essentially a coordinate system described by a matrix, the camera matrix. By multiplying this matrix onto the modelview matrix you should be able to map the world coordinate system to the camera coordinate system. My goal is to create my camera model in such a way that I can make arbitrary rotations and translations in the camera’s coordinate system, just like you do with glRotate and glTranslate in the world’s coordinate system.
So I was thinking of using a 3x3 matrix to represent camera rotation/orientation and a 3-vector for translation/position. When I want to create a 4x4 homogeneous matrix for use with glMultMatrixf, I just build a 4x4 rotation matrix out of the camera’s 3x3 matrix and a 4x4 translation matrix out of the 3-vector. I then multiply the rotation matrix by the translation matrix and I’m done. Yet if I multiply a rotation matrix by the camera’s 3x3 matrix, I get a rotation around the world coordinate system translated to the camera position. I want to be able to rotate around the camera’s coordinate system, so I’m doing something wrong. The translation also takes place in the world’s coordinate system, but that’s OK, since the camera position specified by the 3-vector is supposed to be in world coordinates.
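In code, the construction I describe is roughly this (a minimal sketch; the function name, the row-major storage of the 3x3, and the T*R composition order are just my assumptions):

```c
/* Sketch only: pack a row-major 3x3 orientation R and a world-space
 * position t into a column-major 4x4 suitable for glMultMatrixf.
 * Putting t straight into the fourth column composes translation
 * after rotation, i.e. M = T * R. */
void build_camera_matrix(float out[16], const float R[9], const float t[3])
{
    /* column-major: element (row, col) lives at out[col*4 + row] */
    for (int col = 0; col < 3; ++col)
        for (int row = 0; row < 3; ++row)
            out[col*4 + row] = R[row*3 + col];
    out[3] = out[7] = out[11] = 0.0f;   /* bottom row */
    out[12] = t[0];                      /* translation column */
    out[13] = t[1];
    out[14] = t[2];
    out[15] = 1.0f;
}
```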

a) Does any of the above make sense?
b) Does anyone know of a way of implementing a camera model that achieves what I want?
Thanx in advance.

Treat your “camera model” as the root transform of your scene. Keep your hierarchy as you probably already have it, and keep a copy of your “camera matrix” on hand to concat at the root level.

Also keep in mind that your “camera matrix” is just another 4x4 homogeneous xform, BUT it should apply an inverse transformation to any hierarchy that it is concatenated with. So you could do either of the following:

Apply transformations to the camera matrix like a normal matrix and then invert it; keep the inverted matrix on hand for use when concatenating with a hierarchy.

Have a class or set of routines (if you’re in plain C) that you perform all your camera transformations with, but when doing a TranslateCamera( x, y, z ) you actually apply a {-x, -y, -z} translation inside that routine; that way your camera matrix is already inverted and you can simply use it as is when concatenating with a hierarchy. Do the same with your rotation and scale operations and that’s it.
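A rough sketch of that second option, in C (the Mat4 type and function names are hypothetical; column-major OpenGL-style storage is assumed). Since left-multiplying by a pure translation just adds to the translation column, the negated offset can be baked straight into the matrix:

```c
/* Sketch only: keep the camera matrix pre-inverted by applying the
 * opposite transform inside each camera routine. */
typedef struct { float m[16]; } Mat4;   /* column-major, OpenGL style */

void Mat4Identity(Mat4 *m)
{
    for (int i = 0; i < 16; ++i)
        m->m[i] = (i % 5 == 0) ? 1.0f : 0.0f;  /* 1 on the diagonal */
}

void TranslateCamera(Mat4 *cam, float x, float y, float z)
{
    /* Moving the camera by (x, y, z) moves the world by (-x, -y, -z),
     * so accumulate the negated translation into the fourth column.
     * This is T(-x,-y,-z) * cam for a pure translation on the left. */
    cam->m[12] -= x;
    cam->m[13] -= y;
    cam->m[14] -= z;
}
```

You would do the analogous thing in RotateCamera (apply the transposed/negated-angle rotation) so the matrix is always ready to concatenate as-is.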

I’ve done both in the past, but these daze I prefer a LookAt camera model where I calculate the camera matrix based upon a camera position and a look-at position. If you’re interested in that, track down the MESA source and you can use the LookAt routine supplied there. (Actually, there may be one inside the glu library… I forget, 'cause it’s been so long ago that I grabbed the LookAt source…)

Thanx for helping out, blake,
but you see, my problem isn’t that rotations are carried out the other way round (like when I press left the camera rotates right), but that they seem to take place around a coordinate system identical to the world’s coordinate system, translated to the camera position. Translations also take place according to that system, but that’s OK, since I plan on using world coordinates to translate. So if I want to translate in camera coordinates (which should happen most of the time) I will convert those values to world coordinates (somehow) and translate by them.
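For what it’s worth, that camera-to-world conversion can be sketched as rotating the local movement vector by the camera’s orientation before adding it to the world-space position (a sketch only: the function name is made up, and it assumes the 3x3 stores the camera’s right/up/forward axes as rows, row-major):

```c
/* Sketch only: move the camera by a vector expressed in its own
 * coordinate system.  R holds the camera axes as rows (row-major),
 * so world delta = localX*row0 + localY*row1 + localZ*row2. */
void move_camera_local(float pos[3], const float R[9], const float local[3])
{
    for (int i = 0; i < 3; ++i)
        pos[i] += R[3*0 + i] * local[0]    /* along camera right   */
                + R[3*1 + i] * local[1]    /* along camera up      */
                + R[3*2 + i] * local[2];   /* along camera forward */
}
```

The same idea fixes the rotation problem: composing the delta rotation on the camera side of the product (rather than the world side) makes it happen about the camera’s own axes.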
That part about the LookAt camera model looks quite interesting, though. I’m currently implementing a terrain rendering engine (using the ROAM algorithm) which I eventually hope to use in a game (if university leaves me any spare time), but I’m keeping my goals small for now, so I’m planning on coding a demo first with some terrain and the camera flying through it randomly (sort of like in the Unreal Tournament intro). Would the model you proposed be suitable for this?