What is focal length in OpenGL?

I am trying to teach my application to render two-color stereo anaglyphs.

I am using a paper on stereo projection written by Paul Bourke to create asymmetric offset frustums for the left and right eye views of my image.

The text and code from Paul Bourke’s paper uses the “focal length” of the view to calculate the appropriate eye separation for left and right eye views.

What is focal length in this context? It’s clearly not lens focal length as the term is used in photography. An illustration from the paper suggests that it might be the distance from the camera position to the far plane of the frustum, but it’s not that either.

The sample code sets a fixed focal length of 100, and a near clipping plane of focal_length/5.

Is the focal length the distance to the furthest point in the bounding sphere of my 3D objects?

My app allows the user to change the distance from the “camera” to my 3D object, which creates closeup views of my 3D object. Presumably, that will change the focal length, but maybe not?

Any help would be appreciated.

You should ask the author of the paper.

The focal length should be the distance between the camera origin and the point where the two view directions cross over. At that point there will appear to be no stereo separation; it will appear to the viewer to lie on the surface of his/her monitor. Any point past the focal point in the scene will appear behind the monitor, and any point in front of the focal point will appear to be in front of the monitor.

Hope this helps. I think the guy's diagrams are confusing and not really helpful.


As another poster suggested, I asked the author of the paper directly. He confirmed what you said, that “focal length” is the zero parallax point, where the two views line up.

Here’s another somewhat related question.

My app plots a single 3D object, a polygon mesh that is a height map of a fractal image. The shape can be scaled or rotated arbitrarily.

I’d like for my stereoscopic view to be able to set the zero parallax point anywhere from the frontmost point in the object to the rear-most point in the object. I will have an input value that ranges from -1 to 1, where 1 places the object completely in front of the screen, 0 centers the object on the plane of the screen, and -1 places the object completely behind the plane of the screen.

How do I (efficiently) calculate the plane of the closest point in my object and the plane of the furthest point in my object, at the current scale and rotation?

To plot my object, I set the current matrix mode to the model view, use gluLookAt to set the camera position, and then glRotate to rotate my model view.

I’d go with leaving the model view matrix alone and just changing the projection matrix.

If you set stereo with the model view matrix you will introduce parallax errors.


I should have been clearer. I use the model view matrix to scale, rotate, and place my object.

I use the projection matrix to adjust the display frustum for the left and right eye camera views.

I use the model view matrix to shift the camera position for the left and right eye views.

This is as the paper by Paul Bourke describes.

The parallax shifts from moving the camera are DESIRED, not an error. In natural vision each eye has a parallax shift, and that is what creates the stereoscopic effect.

What I want to do is figure out the front and back bounding planes of my object from the camera position. I could use those values to set the near and far limits of the zero parallax distance (or “focal length,” as Paul Bourke’s paper names it).

I’m no expert on 3D modeling, so I need help figuring this out.