I appreciate that OpenGL supports stereo as far as knowing about stereo buffers goes, but I think stereo rendering can be improved.
Consider this: a stereo application has to select one eye and render the geometry to the appropriate buffer. It then selects the other eye, reconfigures the projection/modelview and re-renders the same geometry. Now, traversing the scene graph and computing what geometry is visible takes time. True, you could capture the geometry in a display list while rendering one eye (using compile-and-execute), but display lists tend to slow things down (a noticeable speed loss on an SGI O2, less so on an Onyx2).
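For reference, the conventional two-pass loop looks roughly like this (a sketch only, assuming a quad-buffered stereo visual and a live GL context; draw_scene() and eye_sep are placeholder names, not anything from a real API):

```
/* sketch: needs a GL context with a stereo visual */
glDrawBuffer(GL_BACK_LEFT);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(-eye_sep, 0.0f, 0.0f);   /* left-eye offset */
draw_scene();                         /* full traversal, pass 1 */
glPopMatrix();

glDrawBuffer(GL_BACK_RIGHT);
glPushMatrix();
glTranslatef(+eye_sep, 0.0f, 0.0f);   /* right-eye offset */
draw_scene();                         /* full traversal, pass 2 */
glPopMatrix();
```

The point being that draw_scene() — the expensive traversal — runs twice.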
How about this for an idea. If S is the modelview transform that offsets the geometry for each eye, then the projection equation becomes

    v' = P * S * M * v

where P and M are the projection and modelview matrices respectively. Suppose you actually post-multiply the projection matrix by S, so you have

    v' = Q * M * v
where Q = PS from before, and you get the same effect. Now, suppose Q1 and Q2 are the projection/stereo offsets for the left and right cameras; the trick for stereo mode could be to push BOTH matrices onto the projection stack. On a multiple-pipe rendering system, one half of the pipes could use the top of the projection matrix stack and render to one buffer, and the other half could use the second matrix and the OTHER buffer. Thus the geometry only needs to be specified once, and it's projected and rasterized twice.
Problems with this: well, obviously the visible geometry will differ between the eyes. If the user is close to the screen in a CAVE-like system, a lot of effort might be wasted transforming geometry that is ultimately clipped out of one view, but I reckon it would be a pretty cool feature…
glDrawBuffer(GL_BACK);                     /* selects back left and back right implicitly */
glTranslatef(-eye_sep, 0.0f, 0.0f);        /* left eye */
glTranslatef(eye_sep * 2.0f, 0.0f, 0.0f);  /* right eye */
glVertex3f(x, y, z);                       /* <- projected by TWO matrices */