I am just attempting to learn about pbuffers and binding them directly to textures, in an attempt to use a GeForce FX to speed up a simulation. I noticed a couple of things that surprised me:
- pbuffers apparently can be double buffered, but:
- if any color buffer of a pbuffer is bound to a texture object, then NONE of the color buffers of the pbuffer can be used for rendering.
- SwapBuffers does nothing when a pbuffer color buffer is bound (I’m not clear yet whether this means “when bound to a texture object” or “when bound as the rendering context”)
I think it would be useful to be able to bind the "front" color buffer of a pbuffer to a texture object, and then use that texture while rendering to the "back" color buffer of the same pbuffer. Then allow SwapBuffers to flip the two color buffers: the one that was just rendered to becomes the newly bound texture, and the one that used to be the texture becomes the destination for rendering.
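To make the proposal concrete, here is a sketch of the *desired* usage, not behavior any current implementation provides. It borrows the names from WGL_ARB_render_texture (`wglBindTexImageARB`, `WGL_FRONT_LEFT_ARB`); `pbufDC`, `pbuf`, and `drawSimulationStep` are hypothetical, and all setup is omitted:

```c
/* Proposed semantics only -- this does NOT work today, since binding
 * any color buffer to a texture disallows rendering to the pbuffer
 * and makes SwapBuffers a no-op. */
wglBindTexImageARB(pbuf, WGL_FRONT_LEFT_ARB);  /* front buffer = texture */
while (simulating) {
    /* render to the back buffer, sampling the front-buffer texture */
    drawSimulationStep();
    /* proposed: exchange the buffers' roles; the just-rendered buffer
     * becomes the bound texture, the old texture becomes the target */
    SwapBuffers(pbufDC);
}
```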
This effect can be achieved with two pbuffers and several binding/unbinding calls at each "flip" point, but the SwapBuffers approach would seem to be:
- less error- and leak-prone
- more likely to provide OpenGL implementors with semantic information about what the programmer is attempting to do, which might lead to more efficient implementations.
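For comparison, here is a sketch of the two-pbuffer workaround using the real WGL_ARB_render_texture entry points (`wglBindTexImageARB`, `wglReleaseTexImageARB`). The globals (`pbufA`/`pbufB`, their DCs, the shared context, the texture objects, and the function pointers) are assumed to have been created during initialization, and error handling is omitted:

```c
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"  /* WGL_ARB_render_texture types and tokens */

/* hypothetical names -- assumed to be set up during initialization */
extern HPBUFFERARB pbufA, pbufB;
extern HDC dcA, dcB;
extern HGLRC rc;
extern GLuint texA, texB;
extern PFNWGLBINDTEXIMAGEARBPROC    wglBindTexImageARB;
extern PFNWGLRELEASETEXIMAGEARBPROC wglReleaseTexImageARB;

static int frame = 0;

void flip(void)
{
    /* alternate which pbuffer is the texture source and which is
     * the render target on each frame */
    HPBUFFERARB src = (frame & 1) ? pbufB : pbufA;  /* becomes texture */
    HPBUFFERARB dst = (frame & 1) ? pbufA : pbufB;  /* becomes target  */
    HDC dstDC       = (frame & 1) ? dcA   : dcB;
    GLuint srcTex   = (frame & 1) ? texB  : texA;

    /* must release the target pbuffer's color buffer from its texture
     * before rendering to it again */
    wglReleaseTexImageARB(dst, WGL_FRONT_LEFT_ARB);

    /* make the destination pbuffer current, then bind the other
     * pbuffer's color buffer as the source texture */
    wglMakeCurrent(dstDC, rc);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    wglBindTexImageARB(src, WGL_FRONT_LEFT_ARB);

    /* ... render the next simulation step here ... */

    frame++;
}
```

The release/make-current/bind sequence at each flip is exactly the bookkeeping that a SwapBuffers-based flip could fold into a single call, which is the point of the suggestion above.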