How to use triple buffering with OpenGL

I've never lost a texture yet.

Originally posted by bgl:
At some level, I think this means that an OpenGL implementation needs to keep a copy of all textures in memory, because after upload you might lose the copy on the card, but the user would still expect it to be there.

Yep, exactly true.

I could add an “although” phrase after that, but that’d take a whole lot of explaining.

  • Matt

“The latency issue is negligible compared to the performance gains”
huh?

Depends on how you’re measuring performance. My preferred measurement is “max latency”, closely followed by “latency jitter”.
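
For concreteness, a minimal sketch of those two measurements as frame-time statistics; the sample values are made up, and the timing source is whatever high-resolution timer the platform provides:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical per-frame times in milliseconds; in a real app these
     * would come from a high-resolution timer around each buffer swap. */
    double frame_ms[] = { 16.6, 16.7, 16.6, 33.4, 16.7 };
    int n = sizeof frame_ms / sizeof frame_ms[0];

    double max = frame_ms[0], min = frame_ms[0];
    for (int i = 1; i < n; i++) {
        if (frame_ms[i] > max) max = frame_ms[i];
        if (frame_ms[i] < min) min = frame_ms[i];
    }

    /* "max latency" = the worst single frame; "jitter" = the spread
     * between the best and worst frames. */
    printf("max latency: %.1f ms, jitter: %.1f ms\n", max, max - min);
    return 0;
}
```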

What I meant was that since the most common case is probably that the frame rate is lower than the refresh rate, triple buffering can give considerable gains in performance, i.e. frame rate.
In my experience, the added latency is hardly noticeable.
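
To put numbers on that (illustrative arithmetic, assuming a 60 Hz refresh and a hypothetical 20 ms render time): with double buffering under vsync the swap must wait for the next retrace, so a 20 ms frame costs two full refresh intervals and the app drops to 30 fps, while a third buffer lets it keep rendering and sustain about 50 fps. A small C sketch of that arithmetic:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double refresh_hz = 60.0;   /* assumed monitor refresh rate */
    const double render_ms  = 20.0;   /* hypothetical per-frame render time */
    const double refresh_ms = 1000.0 / refresh_hz;

    /* Double buffering + vsync: the swap waits for the next retrace, so a
     * frame always occupies a whole number of refresh intervals. */
    double fps_double = 1000.0 / (ceil(render_ms / refresh_ms) * refresh_ms);

    /* Triple buffering: the app draws into the third buffer while a finished
     * frame waits for retrace, so throughput is limited only by render time
     * (never exceeding the refresh rate). */
    double fps_triple = fmin(1000.0 / render_ms, refresh_hz);

    printf("double buffered: %.0f fps\n", fps_double);  /* 30 fps */
    printf("triple buffered: %.0f fps\n", fps_triple);  /* 50 fps */
    return 0;
}
```

The gain shows up precisely when render time is not a clean multiple of the refresh interval, which is the common case described above.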

But that extra buffer is why we need the extension: you can't just go around taking away video memory from an application that isn't expecting it.

Why would an application want to keep track of how much local memory is available? And isn't that impossible anyway, since, if I'm not mistaken, you can force supersampling AA in the GeForce and Radeon drivers?


Originally posted by tobbe_t2:
Why would an application want to keep track of how much local memory is available? And isn't that impossible anyway, since, if I'm not mistaken, you can force supersampling AA in the GeForce and Radeon drivers?

No, I'm not saying that applications track memory (although there are probably some that try). I'm talking about games that have been designed with a minimum available memory requirement in mind. If a game is designed to run only at 800x600 in 16-bit color, then you know it needs about 1.8 MB for a double-buffered frame buffer. If it then has another 1.5 MB of textures, that's a total of about 3.3 MB. In this case the developers/publisher/whatever may have decided to list a "minimum system requirement" of 4 MB video memory on the box. If you now force that app to have three buffers, it needs approximately 4.2 MB of video memory, and the listed requirement of 4 MB is no longer sufficient. People who bought the product to run on their 4 MB whatever-brand card may find that the app runs at less than acceptable frame rates.
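
Spelling out that arithmetic (a minimal sketch using only the numbers above, with 1 MB taken as 1024*1024 bytes):

```c
#include <stdio.h>

int main(void)
{
    const double MB = 1024.0 * 1024.0;
    double color_buffer = 800.0 * 600.0 * 2.0;  /* one 16-bit color buffer */
    double textures     = 1.5 * MB;             /* texture budget from above */

    /* Matches the ~3.3 MB and ~4.2 MB figures quoted in the post. */
    printf("double buffered: %.1f MB\n", (2.0 * color_buffer + textures) / MB);
    printf("triple buffered: %.1f MB\n", (3.0 * color_buffer + textures) / MB);
    return 0;
}
```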

As a driver developer, you can't just go changing stuff like this without breaking compatibility with already-shipped applications. Now, if you want to add something in the display properties to override the app (which is what the anti-aliasing setting is) but leave it defaulted to off, that's fine. These things are for advanced users (who would probably know how to fix the problem if it occurred), and the average user would never even know about them. This would allow you to increase performance in existing software that doesn't know about the new GL_triple_buffer extension. However, we still need the extension so that when I create my next app, I can use it to get maximum performance on any machine without requiring the user to change any display settings.
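
No such extension exists at the time of writing, but if it did, an app could opt in with the usual extension-string check. A hedged sketch: the extension name comes from this thread and is hypothetical, as are any entry points it would define:

```c
#include <GL/gl.h>
#include <string.h>

/* Naive extension check; a robust version would match whole
 * space-delimited tokens rather than raw substrings. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* Called once, after the GL context has been created. */
void choose_buffering(void)
{
    if (has_extension("GL_triple_buffer")) {
        /* enable triple buffering through whatever entry points the
         * extension would define (hypothetical; none exist today) */
    } else {
        /* fall back to the standard double-buffered pixel format */
    }
}
```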

I agree. But I still think a driver setting would be nice alongside the extension, with a default "off" state, for the reasons you gave.

Wow!

I didn't surf for about a week, and then I saw that there are a lot of people interested in this. I didn't expect so many replies. However, it was fun to see all those tables where you listed the pros and cons of triple buffering, and the discussions about whether one would want to use it or not.
I thank all of you for this.
However, I have one question: why do some of you tell me that it has no advantages and that I should not use it? I didn't ask whether it is better than double buffering, but how to use it!
OK, I now know that it doesn't seem to be possible. That's a pity. I agree with all those who think there should be such an extension, and I think there will be one soon.

Thank you again.