What exactly is the up vector that needs to be specified here? At first I thought I had to manually specify the up orientation of the camera, but I couldn't see any difference between doing that and just always using <0,1,0>. So basically:
is the up-vector the camera up, or the world up, and then gluLookAt calculates the camera up for me?
Think about it. gluLookAt creates a matrix based on the position and orientation of the camera in the world.
To uniquely specify a position/orientation in the world requires 4 vectors (much like a 4x4 transformation matrix): the position, the front direction, the up direction, and the left (or right) direction. Because the world is a right-handed coordinate system by default, the left-right direction can be inferred from the other two (i.e., the camera will not suddenly transform the world into a left-handed coordinate system). Therefore, all the camera needs is a position, an up, and a front.
Note that these vectors need not be exactly orthogonal (i.e., your front and up vectors don't have to be perpendicular). The only limitation is that the front and up vectors cannot lie along the same line.
So, yes, the up vector is the camera’s up direction.
Ok, that is what I was thinking originally, but a piece of code in an OGL tutorial threw me off. The way you explained it makes sense. That leads to my next question: actually finding this up vector. I want it to be truly perpendicular to LookAt-Eye, but I have seen two ways to do this. First is the way I originally thought I should use:
Basically the Eye LUV setup:
l = (lookat - eye) / ||lookat - eye||
up = world up
v = (l x up) / ||l x up||
u = v x l
where u is the camera up.
Then someone mentioned this way of setting it up: take the LookAt-Eye vector, swap two of its values, and negate one; that vector would then be the camera up. I would normalize it, of course.
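As I understand it, the trick looks something like this. I'm guessing at the exact variant the tutorial used; a common form zeroes one component, swaps the other two, and negates one of the swapped pair, which makes the dot product cancel to zero:

```c
typedef struct { double x, y, z; } vec3;

/* Swap-and-negate trick (one common variant, my guess at the
 * poster's version): (-y, x, 0) . (x, y, z) = -xy + xy + 0 = 0,
 * so the result is always perpendicular to the view direction.
 * Caveat: it degenerates to the zero vector when the view
 * direction lies along the z axis. */
vec3 swap_up(vec3 l) {
    vec3 u = { -l.y, l.x, 0.0 };
    return u;
}
```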
I tried to think of situations where the second method would break down, but I wasn't able to come up with any on my own. If anyone has experience with this method, or knows of any shortcomings, that would be great. Otherwise, it obviously would be the way to go, because it saves quite a bit of computation.
Figured it out, but just in case anyone else has the same problem in the future…
The method that just swaps two values of the vector and negates one does always generate an up vector perpendicular to the LookAt-Eye vector, but there is no control over maintaining the correct orientation: the camera will sometimes be sideways, or upside down. Therefore, Eye LUV is the way to go. If you added extra logic to the second method you could probably get it to always be oriented properly, but that would probably end up being just as computationally expensive as the first method. If someone wants to prove me wrong, have fun; it just isn't worth my time to figure it out.