Why left and right buffers for stereo?

I’m not really a beginner, but this might be a dumb question… What is the advantage of having left and right buffers for stereoscopic rendering? If you have to render the scene twice anyway, it seems like you might as well render sequentially into the same buffer, or render into two separate GL contexts, depending on what you want to do with the images.

Now, if there were separate left and right modelview and projection matrices as well as left and right buffers, then you could render once to get the two images, which would be cool.
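For reference, the classic quad-buffer approach is exactly the “render twice” pattern: each eye goes into its own half of the back buffer, selected with glDrawBuffer(), with a different eye offset each pass. A minimal sketch, assuming a stereo-capable context; setup_eye_matrices() and draw_scene() are hypothetical application helpers:

```c
/* Sketch of classic quad-buffer stereo: one context, one swap,
 * two draw passes selected with glDrawBuffer(). Assumes the context
 * was created with a stereo pixel format; setup_eye_matrices() and
 * draw_scene() are hypothetical application helpers. */
#include <GL/gl.h>

void setup_eye_matrices(int eye);  /* loads per-eye projection/modelview */
void draw_scene(void);

void render_stereo_frame(void)
{
    /* Left eye: direct all rendering into the left back buffer. */
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    setup_eye_matrices(0);
    draw_scene();

    /* Right eye: same scene, offset eye position/frustum. */
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    setup_eye_matrices(1);
    draw_scene();

    /* A single SwapBuffers then presents both eyes together. */
}
```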

It’s a legacy thing. It was mostly used for those old VR helmets, and it predates MRTs and FBOs, so today we wouldn’t even bother.

Even with FBOs etc., you still need two separate framebuffers for the video hardware to scan out of.
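To illustrate that split: you can render each eye offscreen into its own FBO, but the results still have to be copied into the two scan-out buffers of a stereo default framebuffer. A sketch, assuming GL 3.0+ blit entry points (via a loader such as GLEW) and that fbo_left/fbo_right have been rendered elsewhere:

```c
#include <GL/glew.h>  /* any loader exposing the GL 3.0 entry points */

/* Copy per-eye FBO contents into the left/right back buffers of a
 * stereo default framebuffer. fbo_left, fbo_right and the window
 * size are assumed to be set up elsewhere. */
void present_eyes(GLuint fbo_left, GLuint fbo_right, int w, int h)
{
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  /* default framebuffer */

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_left);
    glDrawBuffer(GL_BACK_LEFT);
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_right);
    glDrawBuffer(GL_BACK_RIGHT);
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```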

With a stereo pixel format, the video hardware is going to flip between left and right at the refresh rate of the display device, regardless of how often you update the content.
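For context, requesting such a stereo pixel format looks roughly like this with (free)glut; GLUT_STEREO is the relevant flag, and window creation will typically fail if the driver or display can’t provide quad-buffer stereo:

```c
/* Sketch: requesting a quad-buffer stereo pixel format via (free)glut. */
#include <GL/glut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
    glutCreateWindow("stereo test");
    /* ... register callbacks, enter glutMainLoop() ... */
    return 0;
}
```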

In other words, if you can guarantee that you will continually render frames faster than the display’s refresh rate, you can implement stereo manually with only one framebuffer. Otherwise you need two buffers allocated.
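As a rough sketch of that manual approach, assuming a vsync’d SwapBuffers and reusing the hypothetical helpers from above; note that a single missed frame swaps the eyes, and syncing shutter glasses to this is left out entirely:

```c
/* Sketch of manual stereo with a single back buffer: alternate eyes
 * on successive vsync'd swaps. Only works if every frame beats the
 * display refresh; miss one deadline and the eyes invert.
 * swap_buffers_vsync() stands in for the platform swap call
 * (SwapBuffers/glXSwapBuffers) with vsync enabled. */
void render_loop(void)
{
    int eye = 0;  /* 0 = left, 1 = right */
    for (;;) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        setup_eye_matrices(eye);   /* hypothetical helper, as above */
        draw_scene();
        swap_buffers_vsync();      /* placeholder for the platform swap */
        eye ^= 1;                  /* next refresh shows the other eye */
    }
}
```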