OK … I’m using gluLookAt as the core of the camera … I thought it was all going to be simple and wanted to avoid my quaternion based camera class that I developed when I did manage to get my head around such abstract maths.

I’ve got the following vectors Veye - the camera position, Vtar - the target and Vnew - a new target. I need to slerp between Vtar and Vnew and can’t get my head around how to do it.

I can get the angle between Vtar and Vnew, so how do I rotate Vtar through an angular increment towards Vnew? … It gets more complicated as Vnew is a moving target.

So long as the angle is relatively small, you should be able to just linearly interpolate the view vectors (center - eye). Naturally, watch out for boundary conditions.
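A minimal sketch of that idea, assuming unit-length view directions; the Vec3 helpers and the name lerpView are mine for illustration, not from any particular library:

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
float length(Vec3 v)         { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
Vec3  normalize(Vec3 v)      { return scale(v, 1.0f / length(v)); }

// Linearly interpolate between the current and desired view directions
// (center - eye), then re-normalize, since the lerped vector is
// generally shorter than unit length.
Vec3 lerpView(Vec3 eye, Vec3 curTarget, Vec3 newTarget, float t)
{
    Vec3 a = normalize(sub(curTarget, eye));
    Vec3 b = normalize(sub(newTarget, eye));
    Vec3 v = add(scale(a, 1.0f - t), scale(b, t));
    return normalize(v);  // beware: near-zero length when a and b oppose
}
```

The result can then be fed straight back into gluLookAt as the new center point (eye + direction).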

I would also add that you might want to experiment with not using evenly spaced increments.

I once did a viewer that would change views by going in percentages. For example, go [x]% on each render. There would always be a “current camera angle” and a “desired camera angle”.

Then, all the camera-changing code would just read and write to the desired_camera_angle vector. The view would just follow along.

There was no need to keep track of what step I was on in some kind of animation sequence. It just always interpolated the two vectors. Even if the vectors were the same.

This causes the camera to move swiftly at first, then slow down and smooth out as it reaches its target. I would use the current frames-per-second reading to determine what percentage to use.
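That scheme fits in a few lines; the Vec3 struct and the name followCamera are illustrative, not from the original code:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Every frame, move the current aim some fraction of the way toward
// the desired aim. There is no animation state to track: when the two
// vectors are already equal the camera simply stays put, and otherwise
// the motion is fast at first and eases out as it approaches.
void followCamera(Vec3& current, const Vec3& desired, float fraction)
{
    current.x += (desired.x - current.x) * fraction;
    current.y += (desired.y - current.y) * fraction;
    current.z += (desired.z - current.z) * fraction;
}
```

All the camera-changing code writes only to the desired vector; the render loop calls followCamera once per frame and hands the current vector to gluLookAt.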

Thanks for the feedback … I do have the basis of an interpolation routine working, and I like the idea of slowing the camera down as it nears the target … but my rotation routines are not correct. At first I thought the camera was tracking an odd path because the target is moving, but I now realize the rotation itself is wrong.

What I’d really like is an example of how to achieve the rotation from the current target to the new target. I’m fighting the growing suspicion that it will involve quaternions, and I think my ‘odd’ tracking path might originate from the Euler-based maths.

Finally, set the new vector as the current vector. Then set up your camera and perform the render pass.

If percentage is set to 0.5, for example, then each time the program renders, it will orient the camera to point at some location that is halfway between where it was pointing and where you want it to point.

If the program is rendering pretty fast (60 fps) then 0.5 might be too large a percentage. Something like 0.01 might be better. You have to tweak it to your tastes. You also need to adjust it based on how fast the program renders: you wouldn’t want it to behave differently on a machine that gets 20 fps than on a machine that gets 100 fps.
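One common way to make the percentage frame-rate independent is to derive it from the measured frame time; the exponential form below is a suggestion of mine, not something from the original post:

```cpp
#include <cassert>
#include <cmath>

// Per-frame interpolation fraction that converges at the same speed
// regardless of frame rate: two short frames move the camera exactly
// as far as one frame twice as long. 'rate' (in 1/seconds) controls
// how aggressively the camera chases its target; the name and value
// are illustrative.
float frameFraction(float rate, float dt)
{
    return 1.0f - std::exp(-rate * dt);
}
```

Feed the result into the percentage-follow step each frame, using the previous frame's duration as dt.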

Also, as Lindley pointed out, there will be boundary cases to keep in mind. If the camera is currently looking down the positive X axis and you want it to pan over and look at an object along the negative X axis (a 180-degree change), you will run into a boundary case where (0,0,0) is a possible interpolated result between those two vectors. You wouldn’t want to set your camera’s “at” vector to (0,0,0): that vector has a length of zero and an undefined direction, and the camera might freak out.
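A cheap guard for that degenerate case might look like this; the epsilon and the direction of the nudge are arbitrary choices for the sketch, and a fixed nudge can itself be parallel to the view in unlucky configurations:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// If the interpolated "at" direction collapses toward zero length
// (the 180-degree case), substitute a small non-zero nudge rather
// than handing gluLookAt a degenerate vector.
Vec3 safeAt(Vec3 v)
{
    float len2 = v.x*v.x + v.y*v.y + v.z*v.z;
    if (len2 < 1e-6f)
        return {0.0f, 1e-3f, 0.0f};  // any consistent non-zero offset
    return v;
}
```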

Hey, thanks … I’d actually tried linear interpolation before, but I made the mistake of forgetting to normalize the vectors and ended up interpolating the actual distance as well.

I think that quaternions are the way to really go … I’d like to avoid the camera pointing downwards as it tracks between targets. I remember the pain of developing with quaternions before, and it’d be useful to read up more on their use and implementation. Practical examples are key, and I’ve found little in that vein but plenty of deep mathematical studies … are there any good books on the subject with ‘real-life’ examples?

Don’t rotate the vector, just interpolate it:
You will need to normalize this vector, as the result of LERPing two vectors is hardly guaranteed to be normalized.

Hope I’m not too off topic … besides the camera angles, I need something that also does the image processing … say a contrast tracker … for example, an algorithm that would track “hot” pixels in a FLIR sensor scene.

Does anyone have a very simplistic implementation of such an algorithm? I’m trying to implement one in GLSL.

I have to look up quaternions again every time I use them. However, the formulas really aren’t that complicated. Just find a site that lays them out and plug in the values.
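For reference, the standard slerp formula plugs in like this: q(t) = (sin((1-t)θ)·q0 + sin(tθ)·q1) / sin θ, where cos θ is the quaternion dot product. A compact sketch, with a lerp fallback for nearly parallel quaternions:

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

Quat slerp(Quat a, Quat b, float t)
{
    float d = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if (d < 0.0f) {                       // take the shorter arc
        d = -d;
        b = {-b.w, -b.x, -b.y, -b.z};
    }
    float wa, wb;
    if (d > 0.9995f) {                    // nearly parallel: plain lerp
        wa = 1.0f - t;
        wb = t;
    } else {
        float theta = std::acos(d);
        float s = std::sin(theta);
        wa = std::sin((1.0f - t) * theta) / s;
        wb = std::sin(t * theta) / s;
    }
    Quat q = { wa*a.w + wb*b.w, wa*a.x + wb*b.x,
               wa*a.y + wb*b.y, wa*a.z + wb*b.z };
    float len = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    return { q.w/len, q.x/len, q.y/len, q.z/len };
}
```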

Take the cross product of the two view directions and glRotate gradually about that axis. Rotating by some interpolation fraction of the acos of the dot product of the same two vectors gives you a simple linear slerp, but you may want to weight that in and out depending on circumstances.

Obviously glRotate operates directly on the matrix stack, so the equivalent operation in your favorite matrix and vector math library would do the trick in software.
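A software sketch of that axis-angle approach, using Rodrigues' rotation formula in place of glRotate; the helper names are mine, and both inputs are assumed to be unit vectors:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y,
                                       a.z*b.x - a.x*b.z,
                                       a.x*b.y - a.y*b.x }; }
float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3  scale(Vec3 v, float s){ return { v.x*s, v.y*s, v.z*s }; }
Vec3  add(Vec3 a, Vec3 b)   { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
Vec3  normalize(Vec3 v)     { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Rotate unit vector v about unit axis k by angle a (Rodrigues):
// v' = v cos a + (k x v) sin a + k (k.v)(1 - cos a)
Vec3 rotateAbout(Vec3 v, Vec3 k, float a)
{
    float c = std::cos(a), s = std::sin(a);
    return add(add(scale(v, c), scale(cross(k, v), s)),
               scale(k, dot(k, v) * (1.0f - c)));
}

// One slerp step between two unit view directions:
// axis = cross(from, to), angle = t * acos(dot(from, to)).
Vec3 slerpDir(Vec3 from, Vec3 to, float t)
{
    float d = dot(from, to);
    if (d > 0.9999f) return to;             // already aligned
    Vec3 axis = normalize(cross(from, to)); // undefined at 180 degrees!
    return rotateAbout(from, axis, t * std::acos(d));
}
```

Here rotateAbout plays the role glRotate plays on the matrix stack, and the 180-degree boundary case noted earlier in the thread still applies: the cross product vanishes there.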

Thanks for all the comments - in the end … well I always knew that the best solution was going to be quaternion based … so I closed my eyes, wished a lot and lo and behold another nice quaternion camera class with SLERP too. Once you believe in magic, quaternions really work so well … and if you’re really lucky, you can sort of get a brief sense of how they work !