I think you’ve missed some of the subtle implications in EXT_rt and my post. You don’t see how the stencil buffer is useful as a render target? With EXT_rt (or uberbuffers or…) you need to provide all the buffers (color, depth, stencil, etc.) that are part of the pixel format in classic, on-screen rendering. Saying the stencil buffer isn’t useful as a render target is the same as saying the stencil buffer isn’t useful for on-screen rendering. I’m sure Carmack would disagree.
The depth-stencil format was an ugly hack. It was a one-off fix that didn’t solve the underlying problem. What does it mean to use a “depth stencil” texture for texture mapping? It’s nonsense, but the API would have to allow it. When you bring accumulation buffers and multisample buffers into the equation, it falls apart even more.
Since there’s so much confusion about it, I’ll give a concrete example. With EXT_rt you could say, “I have an RGB332 texture, a 16-bit depth buffer, and an 8-bit stencil buffer that I want to render to.” I know of no hardware that can draw to that combination of buffers. However, the way the API is designed, the driver has to do it. The only options are to fall back to software rendering or to internally (i.e., behind the application’s back) draw to a different buffer and copy the results. Both of which, IMHO, suck. On the one hand you have unacceptable performance, and on the other hand you have invariance issues. For a driver writer, it’s a lose/lose situation.
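To make the bind concrete, here’s a tiny C model of the contract I’m describing. This is purely illustrative (the names `RenderPlan`, `plan_for`, etc. are my own, not anything from the spec): because EXT_rt gives the driver no way to report a buffer combination as unsupported, every request has to be honored somehow.

```c
/* Toy model of the driver's position under EXT_rt: the API has no
 * "unsupported combination" error, so every requested set of buffers
 * must be handled one way or another.  All names are hypothetical. */
typedef enum {
    DIRECT,           /* hardware can draw to the combination as-is   */
    SW_FALLBACK,      /* render in software: unacceptable performance */
    COPY_AND_RESOLVE  /* draw elsewhere, copy back: invariance issues */
} RenderPlan;

RenderPlan plan_for(int hw_supports_combination)
{
    if (hw_supports_combination)
        return DIRECT;
    /* No error path exists, so the driver picks its poison. */
    return SW_FALLBACK;  /* or COPY_AND_RESOLVE; both lose */
}
```

An API that instead allowed the driver to answer “unsupported” at validation time would remove the lose/lose choice entirely.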
To make it even worse, I seem to recall that you could specify a compressed texture as a render target. I don’t even want to go there…
Uberbuffers, on the other hand, had a really complicated, heavyweight mechanism for the application to describe what kinds of buffers it wanted and then ask the driver what it could do. It was something like a radiation-mutated ChoosePixelFormat. It was evil and universally despised.
I hope people can now see why I get so irritated when the WG is accused of arguing over petty crap.
Given the above, I suspect that this new version has created a bunch of new texture formats that are required for render targets, and that this extension forbids using just any old texture as a render target, requiring specific ones instead.
As such, if it should ever arise that an already existing texture needs to be a render target (without foreknowledge; say, I’m writing a library that someone else uses), then I, the user, must create a new renderable texture, draw the old texture onto it, and delete the old texture. These are things that a driver has both the right and the responsibility to do for us.
To answer your questions: yes and no. Handling the situation you describe was one of our specific design goals. In fact, not surprisingly, that was one of the big gripes people had with uberbuffers. The only times you are required to use the new formats are when you’re rendering to something that you can’t texture from (e.g., stencil) or when you don’t intend to texture from it (e.g., rendering with a depth buffer that you won’t ever use as a depth texture). In the latter case it just provides some optimization opportunities for the driver.
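That rule can be sketched in a few lines of C. Again, this is my own model, not the actual extension API: a dedicated render-only format is needed exactly when you can’t texture from the buffer, or when you declare up front that you never will.

```c
typedef enum { FMT_COLOR, FMT_DEPTH, FMT_STENCIL } BufferKind;

/* Hypothetical model of the rule above: ordinary textures work as
 * render targets; a new render-only format is required only when the
 * buffer can't be textured from (stencil) or the app promises never
 * to texture from it (which gives the driver room to optimize). */
int needs_render_only_format(BufferKind kind, int will_texture_from_it)
{
    if (kind == FMT_STENCIL)
        return 1;                /* can't texture from stencil */
    if (!will_texture_from_it)
        return 1;                /* driver optimization opportunity */
    return 0;                    /* a plain texture is fine */
}
```

The point is that the common case — render to a texture, then texture from it — never forces the application through the new formats.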
Like I’ve said a whole bunch of times: the resulting API is quite clean and gives quite a lot of functionality. I really think people will like it.
tang_m, V-man: Our scope has been limited to replacing pbuffers. Any functionality beyond that will be a follow-on effort. I think that’s one of the problems we had at the start. We bit off way too much at once.
Zak: The ARB meeting is next week.