For some reason, rotating my camera around the X and Y axes leads to strange behavior: everything seems to work fine at first, but it soon starts acting oddly. It works perfectly if I limit the rotation to only one of the two axes.
I think the problem is somehow related to ‘z_look_at’, but I have no idea how to solve it. Any suggestions?
[li] The code you posted doesn’t use angleYZ at all, it only covers movement in the X-Z plane and rotation about the Y axis. If you’re only experiencing problems when angleYZ is non-zero, it would help to post code where angleYZ is actually used.
[/li][li] gluLookAt() is the wrong tool for this control model. Just use glTranslate and glRotate instead.
[/li][li] Making the ground plane X-Z rather than X-Y is silly (but unfortunately common; for some reason I’ve never managed to figure out, people just seem to assume that the default view direction “must” be North).
You’re right… I’ve just realized I missed all the YZ-related code when copy-pasting. It seems everyone agrees that gluLookAt is the wrong way to carry out the task, so I’ve decided to switch to the glRotate/glTranslate solution. However, the camera I’m trying to implement is a sort of fly-by camera, “locked” onto an object that moves through space but always keeps the same (x,y) viewport coordinates (something like GTA’s camera, where the character is always at the center of the screen - apart from a few specific cases - and the camera “follows” him). Is it possible to easily write this type of camera using glTranslate/glRotate? I have a couple of ideas about how to do it, but I’m wondering if there’s something I should know before I trash all the code I’ve already written.
As for the XZ plane being more “newbie-friendly” than the XY one, I think it’s due to the fact that people learn planar geometry first and then move on and add the missing dimension. What you’re looking for when first dealing with 3D is a plane, “a floor”, and it’s easy to pick XZ as your new default plane since it’s always drawn as the horizontal plane in examples.
No. For that case, you should use gluLookAt(), but with the character’s position as the look-at target and the camera position made to follow it. Changing the player’s direction would then leave their position (the look-at target) unchanged.
You could either rotate the camera around the player’s position (i.e. the opposite of what you have now) for a rigid over-the-shoulder view, or have the player drag the camera (when the player changes direction, don’t change the camera position; when the player moves, move the camera toward or away from the player to maintain a given distance), or something else.
That’s why you choose XY as the ground plane. Reality doesn’t have symmetry between axes; it has two horizontal axes forming a horizontal plane, and one vertical axis. There is no distinguished vertical plane (there are infinitely many, one for each compass heading). The 2D plane is the horizontal plane (i.e. the plan view), the “missing” dimension is the vertical.
Apart from the conceptual issues, there are some practical ones. Even when dealing with 3D, there are often things which require 2D plan-view coordinates (e.g. a map as an aid to navigation). If you use XY for horizontal and Z for vertical, you can pass a pointer to a 3D point to something expecting a pointer to a (plan-view) 2D point and it will work (it will just ignore the Z coordinate). If you have an array of 3D coordinates, you can treat it as an array of 2D coordinates with e.g. glVertexAttribPointer simply by specifying the stride. This won’t work with Y-vertical unless you happen to need a north-facing view (or an east-facing view rotated through 90 degrees, which is even less likely).
Ok, so before reading your post I gave it a try anyway, and this new solution seemed nice and easy at first, but I soon discovered the challenging part. This is my code at the moment (only the important chunks, obviously).
Well, it doesn’t work and I’m having a tough time trying to figure out what’s wrong.
What if I don’t want the axis specified in glRotate to “originate” at (0,0,0) but somewhere else? That would do the trick and I could avoid all those sin/cos calculations. Is there a way to do that?
If you need to do anything other than graphics (e.g. collision detection), you are going to need to maintain the transformations within the program. This is a large part of the reason why all of the matrix operations were deprecated in OpenGL 3: they’re of no use for anything but the most trivial cases. Programs which need to do anything with the geometry besides rendering it just construct the matrices locally and transfer the matrices with glLoadMatrix (legacy OpenGL) or glUniformMatrix() etc (modern OpenGL).
Also, implementing a control model using OpenGL matrix operations means that the matrix has to persist; you can’t construct it from scratch each frame starting with glLoadIdentity(). This means that you’ll only ever be able to maintain one such transformation. You can use other transformations temporarily by saving the player transformation on the matrix stack. But you’ll have to discard any such transformations to restore the player transformation, so you can’t use the same technique for anything else, e.g. AI-controlled entities.
So, I’d suggest that you forget about the OpenGL matrix operations and just use GLM instead. Or write your own; the matrices generated by the OpenGL functions are documented in their respective manual pages.