EXT_render_target

It’s really nice to finally see this long-awaited extension showing up. There really IS a god :slight_smile: . Here’s what quickly came to mind while flying over the spec:
I think the ability to create mipmaps via glGenerateMipMapsEXT() should’ve been exposed as an independent ARB extension a long time ago. It seems a bit strange to have all these high-level ARB extensions but automatic mipmap generation covered only by an SGIS extension.
Another thing that would be great is a render target that behaves like the framebuffer, where you can switch color/depth/stencil writes with gl*Mask(). So you just have to call glBindRenderTarget() and render to a texture or the framebuffer. I didn’t think a lot about this, so there may be some issues that prevent this from working.
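Roughly what I have in mind (glBindRenderTarget is just a name I made up, not anything from the spec):

GLuint rtTex;   /* an ordinary texture object used as the render target */

glBindRenderTarget(rtTex);    /* made-up entry point; 0 = window framebuffer */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);   /* mask out alpha writes */
glDepthMask(GL_FALSE);                              /* color-only pass */
drawScene();                  /* the app’s drawing code */
glBindRenderTarget(0);        /* back to the framebuffer */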

…I wrote this last night, but the forums went down before I could post it.

Originally posted by evanGLizr:
[b]Some notes & doubts:

  • Why not STENCIL-only textures? If the graphics card doesn’t support stencil-only, it can always create a combined surface internally with the minimum depth size. I guess the problem comes when the app specifies one drawable for STENCIL and another for DEPTH separately? Is it too much driver work to create a combined texture on the fly and then copy back to one and the other whenever the rendertarget is changed (only necessary when the hardware does not support separate stencil & depth addresses)? [/b]
    The goal here was to make the common case easy. There are no STENCIL textures today, but if that changed in the future, we could easily add that support.

[b]

  • Interactions with textures with borders. In theory using textures with borders as rendertargets shouldn’t pose a problem.
    [/b]
    Agreed that there’s no obvious inability to support borders.

[b]

  • Interactions with compressed textures. Probably you won’t be able to render to these. [/b]
    You can render to a texture whose internal format you’ve requested to be compressed. The actual internal format you get will almost certainly not be compressed.

This shouldn’t be a big deal, because if you want to render-to-texture, you probably want a format that can be rendered to.

[b]

  • Regarding issue 15, why not make it possible to use the same texture as drawable and texture source (as long as you don’t render and read from the same levels/faces/slices, in which case you just say that the result is undefined).
    This is very useful for doing programmable mipmap level generation (render to the lower-detail level reading from the higher-detail one).

    [/b]
    I believe the goal is to allow rendering to and texturing from the same texture as long as it can be proved that you cannot do both to any texels simultaneously. Does the spec wording not make that clear?

[b]

  • What’s the interaction with SwapBuffers? In theory none (i.e. SwapBuffers always swaps the FRAMEBUFFER drawable), but note that this means that if you want to do things like triple buffering or offscreen rendering, whenever you want to present the results you need to render a full screen quad, is that desirable? [/b]
    Interaction with the swap chain will be provided by a layered extension. It was recognized that this was important, but not something that should delay the completion of this spec.

[b]

  • Interactions with glReadPixels, glCopyPixels and glDrawPixels when the drawable is TEXTURE. [/b]
    What interactions? I think this is WYEIWYG (what-you-expect-is-what-you-get). That’s the goal, at least.

[b]

  • Interactions of the texture format with the previous functions: what happens if you do a glReadPixels when the internal format of the texture is GL_RED? What about packed component textures (GL_R5G6B5…) ? Is any texture format supported as rendertarget? If not, how can the application know which formats are available, by trial and error? [/b]
    The internal format you request is a hint. It’s a hint when you do TexImage2D() and it’s a hint when you render to it. The driver is supposed to do the best it can based on your hint.

Good bets for render-ability are not too hard to guess: RGBA8, RGBA16F, RGBA32F, and their RGB equivalents.
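In code, the pattern might look like this (the render-target binding call below is a placeholder for whatever entry point the spec ends up with; the *_BITS queries are standard GL):

GLint redBits;

glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* request a compressed format */

bindTextureAsRenderTarget(tex);         /* placeholder for the real binding call */
glGetIntegerv(GL_RED_BITS, &redBits);   /* the actual precision you were given */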

[b]

  • Interactions with glGetTexImage over the same texture object used as drawable. Can you do any glGetTexImage at all? What results would you get? [/b]
    Why wouldn’t this work? I’m not sure I understand.

[b]

  • Interactions with the current pixelformat: What happens if the current pixelformat has no alpha but the texture does? Is the destination alpha available for rendering when the drawable is TEXTURE? There’s some mention of this in the spec part; I think that the pixelformat should be changed to match the one of the texture when you change the drawable (so you can do destination alpha rendering even if your FRAMEBUFFER doesn’t have alpha). [/b]
    If the texture you’re rendering to has alpha, then your TEXTURE drawable has dstalpha.

There’s no interaction with the FRAMEBUFFER drawable. They’re completely separate.

[b]

  • Interactions with texture sharing (wglShareLists). Does wglMakeCurrent force the copy of the current render target to the texture (this would solve all the single-thread problems)? Cases:
    • When the current rendertarget texture is used as a source in another context. In the multithread case this should have the same limitations as when using the
    • When the given texture object is used as rendertarget in two different contexts. In the multithread case, do you have to resort to saying that rendering to the same texture object from two different threads is undefined? [/b]
      I would defer to the way that Tex{Sub}Image*() behaves in these situations. Is there reason to do otherwise?

[b]

  • Do you really need glDrawable? Why not make it so that when glDrawableParameter for COLOR and DEPTH is zero, the rendering is done to the FRAMEBUFFER? This would allow things like rendering to the color buffer of the FRAMEBUFFER but storing the depth in a texture (is that desirable?). I guess that the main reason to have glDrawable is for the future use of render-to-vertexarray as a glDrawable parameter?[/b]
    There is a significant distinction between rendering to the framebuffer and rendering offscreen. There are no shared resources, no pixel ownership tests, no window system and display peculiarities when you’re just rendering to texture. That makes it simpler and “better” to just have a Big Switch. I expect we’ll add that complexity when it’s needed, as a separate extension. In the meantime most people will be able to get along fine without it.

Thanks -
Cass

Originally posted by glitch:
[b]hi, here’s a little cosmetic question

and then to switch between rendertarget simply call :

*BindRenderTarget(id) // id = 0 means framebuffer

[/b]
One of the nice simplicities of this spec (IMO) is that it allows rendering to regular old Texture Objects. There’s no need to create a new object and associated API.

There will likely be desire to create Drawable objects in the future, but that was intentionally left out to keep from bogging down on issues that were not on the critical path.
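To illustrate (the glDrawableEXT/glDrawableParameteriEXT names and tokens below are inferred from this discussion, so treat them as placeholders and check the spec for the real signatures):

GLuint tex;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* just allocate storage */

glDrawableParameteriEXT(GL_COLOR, tex);   /* attach the texture as the color drawable */
glDrawableEXT(GL_TEXTURE);                /* the Big Switch: render offscreen */
drawScene();                              /* renders into tex */
glDrawableEXT(GL_FRAMEBUFFER);            /* back to the window */

glBindTexture(GL_TEXTURE_2D, tex);        /* now source from it as usual */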

Thanks -
Cass

Does this mean we can do all our offscreen rendering into textures assuming POT resolutions? Are pixel operations going to be slow? For example, glReadPixels performs pretty badly on RTT PBuffers right now.

Does it make sense to have an OFFSCREEN render target?

Much simpler API, avoid context switches, the ability to have simultaneous read/write access to the texture, the ability to render-to-3D-texture. This is pretty good, but Korval raises an interesting point wrt Superbuffers. With VBO, PBO and RenderTarget, we’re pretty close to the functionality of Superbuffers. It is still missing the offscreen stuff and the swap chain stuff, but I take it this will be layered in the future. Is there a need for superbuffers if all it does is provide a unified API? Or are there some features of superbuffers that would still be missing?

-Won

I believe the goal is to allow rendering to and texturing from the same texture as long as it can be proved that you cannot do both to any texels simultaneously. Does the spec wording not make that clear?
Well the spec says the following regarding issue 15:

  1. If a texture is bound for both render and texturing purposes, should the results of rendering be undefined or should INVALID_OPERATION be generated at glBegin()?

UNRESOLVED

Undefined results allow an application to render to a section of the texture that is not being sourced by normal texture operations.
That sounds like the behaviour is undefined, but maybe I missed something? Having undefined behaviour isn’t very good, because it doesn’t allow you to rely on the functionality. It would be better to define the results when the source and destination pixels/texels are disjoint and otherwise leave the results undefined.

Overall I think the proposal is great, very simple and elegant API.

Originally posted by Won:
[b]Is there a need for superbuffers if all it does is provide a unified API? Or are there some features of superbuffers that would still be missing?

-Won[/b]
I think they plan on dropping superbuffers… not sure, but it definitely looks that way.

Originally posted by Won:
[b]Is there a need for superbuffers if all it does is provide a unified API? Or are there some features of superbuffers that would still be missing?

-Won[/b]
Superbuffers would allow you to manage and attach sub-memories. You could take two “classic” mipmapped textures and mix and match individual mipmap levels to form a new mipmap pyramid, without doing copies.
This doesn’t seem to be terribly useful.

[quote][b]

  • Regarding issue 15, why not make it possible to use the same texture as drawable and texture source (as long as you don’t render and read from the same levels/faces/slices, in which case you just say that the result is undefined).
    This is very useful for doing programmable mipmap level generation (render to the lower-detail level reading from the higher-detail one).

    [/b]
    I believe the goal is to allow rendering to and texturing from the same texture as long as it can be proved that you cannot do both to any texels simultaneously. Does the spec wording not make that clear?

Thanks -
Cass[/QUOTE]That is the intent indeed. The spec does specify that it is possible to render to a mip-map level of a texture object while sourcing from a different mip-map level. See section 4.4.4 and the custom mip-map generation example at the end; roughly, it has the shape of the sketch below. Looks to me like we need to add something about cube-maps to the spec though!
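Something like this (the per-level attachment call is a stand-in for the spec’s actual mechanism; the BASE_LEVEL/MAX_LEVEL clamps are standard GL 1.2):

for (int level = 0; level < numLevels - 1; ++level) {
    /* clamp sampling to the level being read, so source and
       destination texels can never overlap */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, level);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, level);

    attachTextureLevelAsRenderTarget(tex, level + 1);   /* stand-in call */
    drawFullscreenFilterQuad();   /* downsample with a custom filter */
}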

Barthold

On rendering to texture with borders… the issue here is not whether there’s some incompatibility with the spec, but what the actual outcome in the texture is. There are two possible outcomes: the border is rendered to, or the border isn’t touched. I can imagine both outcomes being desirable to different developers; however, without rendering inclusive of the border, some desirable things would be impossible. This is an observation, not a feature request. Given the way borders are sometimes laid out in memory, rendering to a texture border potentially has some extremely serious implications for the design and complexity of an implementation, so it would help if things were clear on the intended outcome when you render to a texture with a border image specified. Even the addressability of a texture border is unclear.

I assume by the ‘not a problem’ the inherent assumption is that texture border images are still entirely separate things and you can only render to the texture proper. I tend to think that given the usage borders get, they ain’t worth the hassle/complexity, especially with other features like multitexture available now, but that’s just an opinion.

I would add glGenerateMipmapsEXT as another extension (it should have been done a long time ago).
I don’t know if a rendertarget object would be desirable. Maybe it could save some state if one does a lot of rt switching…

And:
Who invented the render_to_texture extension, and why? Funny, I wanted to post a topic like: “Do you agree that ARB_rtt is crap?”. But I managed to hold myself back. That’s what I call destiny.

Why are you coming with this stuff only now? :slight_smile:

P.S. There is one more problem I would like to discuss… I wrote a simple bloom effect demo some weeks ago. The bloom was done using a gaussian blur (via an fp - it was my first encounter with glslang :slight_smile: ). But I had to repeat this many times. Like:

  1. Bind Blurtex;
  2. DoBlur;
  3. CopyBlurTex;
  4. goto 1.

It would be nice if this extension provided a simple way to do such stuff (rendering to a texture based on that same texture). Like having the ability to copy a texture (glCopyTexture(tex, newtex)) or allowing texture reads while it is bound as a rendertarget.

Although it is a little bit offtopic:
Something like a “Pixel-Shader” (NOT the same as a DX Pixel Shader) would be nice too: a program which operates on each pixel stored in a render target.

I think the blur would be better done directly with ping-pong, as you need 2 textures anyway… it would mean one copy less…

GLuint blur[2];   /* two equally sized blur textures */
int i = 0;        /* index of the current source; i ^ 1 is the destination */

for (int pass = 0; pass < numPasses; ++pass) {
    readfrom(blur[i]);          /* bind blur[i] as the texture source */
    writeblurto(blur[i ^ 1]);   /* render the blur pass into the other texture */
    i ^= 1;                     /* swap roles for the next pass */
}

/* the final blurred texture ends up in blur[i] */

For me, the clearest method.

Originally posted by Corrail:
Although it is a little bit offtopic:
Something like a “Pixel-Shader” (NOT the same as a DX Pixel Shader) would be nice too: a program which operates on each pixel stored in a render target.

That would then be a pixel transform, right? Theoretically it should be doable by simply drawing a huge quad over the buffer and using it at the same time as a texture… dunno, the spec is relaxed, so it should be possible. Depends on hw, though…
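Sketched out, the pass is just this (plain immediate-mode GL; the render-target setup and the fragment program bind are elided):

glMatrixMode(GL_PROJECTION); glLoadIdentity();
glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

glBindTexture(GL_TEXTURE_2D, srcTex);   /* the buffer being transformed */
glBegin(GL_QUADS);                      /* one quad covering the whole target */
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();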

Originally posted by dorbie:
On rendering to texture with borders… the issue here is not whether there’s some incompatibility with the spec, but what the actual outcome in the texture is. There are two possible outcomes: the border is rendered to, or the border isn’t touched.
My expectation is that the border would be rendered to. I agree there could be performance consequences because some implementations might require render->copy.

I’m ok with that for the sake of keeping the common case simple.

Thanks -
Cass

Edit: Finish my thought.

Originally posted by Jens Scheddin:
It’s really nice to finally see this long-awaited extension showing up. There really IS a god :slight_smile: . Here’s what quickly came to mind while flying over the spec:
I think the ability to create mipmaps via glGenerateMipMapsEXT() should’ve been exposed as an independent ARB extension a long time ago. It seems a bit strange to have all these high-level ARB extensions but automatic mipmap generation covered only by an SGIS extension.
Another thing that would be great is a render target that behaves like the framebuffer, where you can switch color/depth/stencil writes with gl*Mask(). So you just have to call glBindRenderTarget() and render to a texture or the framebuffer. I didn’t think a lot about this, so there may be some issues that prevent this from working.

Jens, good points on GenerateMipMaps and gl*Mask. Will raise masking as an issue.

Thanks for the feedback.

JR

One of the nice simplicities of this spec (IMO) is that it allows rendering to regular old Texture Objects. There’s no need to create a new object and associated API.
Absolutely agreed.

I looked at one of ATi’s presentations on superbuffers, and I found the API to be very involved. If I recall correctly, it went something like this. First, you have to allocate a memory buffer. To fill it, you have to bind it to a texture and use glTexSubImage. To render to it, you unbind it as a texture and rebind it to a framebuffer object (probably one you create), then do your rendering.

This one… I allocate a texture as normal, bind it as the render target, and render with it. Simple.

When the time comes to allow VBOs to be used directly as render targets, the API doesn’t change. Instead of specifying a texture object, I specify a VBO. It doesn’t get any easier than that.

Superbuffers might offer a bit more control over the memory buffers of textures and so forth, but I think EXT_render_target is a better abstraction of the functionality that we need.

In a way, superbuffers compared to EXT_render_target reminds me of how VAO compares to VBO. VAO is very complicated, forcing the use of an entirely new API for vertex array binding. VBO simply overloads the normal conventions that we’re used to.

This extension, outside of clarifying the few lingering issues being discussed, is just clearly the best way to expose this kind of functionality. We don’t really need any new functions to create memory buffers; glTexImage and glBufferData are both perfectly acceptable. All that was really needed was a way to bind the texture to a conceptual framebuffer. Now, we have that (or will soon enough, once the spec is finalized and implemented).

Why are you coming with this stuff only now?
I wondered that about VBO for the longest time. It seems so obvious in hindsight.

If I am allowed to summarize :wink: : yes, give it to us! We don’t want superbuffers any more :stuck_out_tongue: !

@dave:
I also came up with this solution, but changing the drawable could be more expensive than copying.
Who knows, though…
Two buffers are also nice, however one virtual buffer would be kind of more… elegant.

I’d like to thank the contributors of the extension being discussed for the initiative to release the extension spec as an RFC. I think this will be really useful for developers, who will be able to learn about the new features sooner and to get used to the new APIs. I also hope it will be useful for IHVs, which will learn earlier what developers think. I hope this won’t be a singular event, but a common practice in the future.

That said, I think the extension still lacks some functionality. However, as someone mentioned, I agree that it’s better to have a clean and minimal spec that works, and to extend it later with the required extensions.

Originally posted by cass:
[b] [quote][b]

  • Interactions with textures with borders. In theory using textures with borders as rendertargets shouldn’t pose a problem.
    [/b]
    Agreed that there’s no obvious inability to support borders.
    [/QUOTE][/b]

Ok, so one can use scissor/stencil to render only to the interior of the texture, and stencil to render to the border. For this stencil usage, maybe it would be desirable to be able to use the framebuffer depth/stencil instead of having to create a depthstencil texture for each texture size?
This is one of the cases where it would be simpler to be able to use heterogeneous rendertarget sizes: when, for example, you are only interested in the color buffer texture, but you still want depth/stencil testing without having to create one depthstencil texture for each texture size you have.

[b]
[quote][b]

  • Interactions with compressed textures. Probably you won’t be able to render to these. [/b]
    You can render to a texture whose internal format you’ve requested to be compressed. The actual internal format you get will almost certainly not be compressed.

This shouldn’t be a big deal, because if you want to render-to-texture, you probably want a format that can be rendered to.
[/QUOTE][/b]

Sure, but the problem is “which formats can be rendered to”? From what you say, there are no restrictions (other than “color-formats” must be rendered as COLOR rendertargets and “depth-formats” as DEPTH/DEPTHSTENCIL). More on this below.

[b]
[quote][b]

  • Regarding issue 15, why not make it possible to use the same texture as drawable and texture source

    [/b]
    I believe the goal is to allow rendering to and texturing from the same texture as long as it can be proved that you cannot do both to any texels simultaneously. Does the spec wording not make that clear?
    [/QUOTE][/b]
    As some other people already pointed out, issue nr. 15 says “UNRESOLVED” and 4.4.4 doesn’t say anything about rendering to different faces/slices.

[b]
[quote][b]

  • Interactions with glReadPixels, glCopyPixels and glDrawPixels when the drawable is TEXTURE. [/b]
    What interactions? I think this is WYEIWYG (what-you-expect-is-what-you-get). That’s the goal, at least.
    [/QUOTE][/b]
    The interactions come with the texture format, see below.

[b]
The internal format you request is a hint. It’s a hint when you do TexImage2D() and it’s a hint when you render to it. The driver is supposed to do the best it can based on your hint.

Good bets for render-ability are not too hard to guess: RGBA8, RGBA16F, RGBA32F, and their RGB equivalents.
[/b]
When it comes to specs and standards, I don’t like guessing. The issue here is: if GL_LUMINANCE is a valid color rendertarget (as spec’ed in 4.4.6), what’s the value of glGetInteger(GL_GREEN_BITS) when that texture is set as a rendertarget? Should it follow table 3.15 of the OpenGL 1.5 spec (where a single component is mapped to R)? Or should it follow the convention described in “Pixel Transfer Operations” on p. 192 of OpenGL 1.5 (where, for example, luminance pixels are R+G+B)?

I know the internal format is a hint, but the driver will have already allocated the texture in a given internal format, and if that internal format doesn’t match a renderable format, it can do three things:
a) Fail to set it as a rendertarget (if this is valid, it should be noted in the spec).
b) Reallocate the memory for that texture in a renderable internalformat. This will probably cause an expansion of the texture.
c) Use a temp rendertarget and reformat it at rendertarget flush.
The other bottom line is: are you exposing the internal format of a texture with this extension? If you set a texture as a rendertarget and do a glGetInteger(GL_RED_BITS), is it obliged to return the internal format? (If so, option b) above wouldn’t be valid.)

Ok, I overlooked the following paragraph from the spec:

When a texture is first bound for rendering the internal format of the
texture might change to a format that is compatible as a rendering
destination.  If the format changes the new format will be guided by
the texture's requested format, and the existing contents of the
texture will be converted to the new format.  Queries with glGet of
GL_DEPTH_BITS, GL_RED_BITS, etc. can be used to determine the actual
precision provided.

So it looks like alternative b) is the one the spec favors, although it should be noted that this goes against p. 128 of the OpenGL 1.5 spec:

A GL implementation may vary its allocation of internal component resolution
or compressed internal format based on any TexImage3D, TexImage2D (see below),
or TexImage1D (see below) parameter (except target), but the allocation and
chosen compressed image format must not be a function of any other state and cannot
be changed once they are established. In addition, the choice of a compressed
image format may not be affected by the data parameter. Allocations must be invariant;
the same allocation and compressed image format must be chosen each
time a texture image is specified with the same parameter values. These allocation
rules also apply to proxy textures, which are described in section 3.8.11.

[b]
[quote][b]

  • Interactions with glGetTexImage over the same texture object used as drawable. Can you do any glGetTexImage at all? What results would you get? [/b]
    Why wouldn’t this work? I’m not sure I understand.
    [/b][/QUOTE]What happens if you do a glGetTexImage on the same texture that is currently bound as a rendertarget? Is that allowed at all? If it is, will that get the current texture values or the ones from before setting it as a rendertarget? (i.e. will a glGetTexImage cause a flush of the drawable?)

[Removed the issues on pixelformats, they get answered with the paragraph I overlooked from the spec]

[b]
[quote][b]

  • Interactions with texture sharing (wglShareLists). Does wglMakeCurrent force the copy of the current render target to the texture (this would solve all the single-thread problems)? Cases:
    • When the current rendertarget texture is used as a source in another context. In the multithread case this should have the same limitations as when using the
    • When the given texture object is used as rendertarget in two different contexts. In the multithread case, do you have to resort to saying that rendering to the same texture object from two different threads is undefined? [/b]
      I would defer to the way that Tex{Sub}Image*() behaves in these situations. Is there reason to do otherwise?
      [/QUOTE][/b]

Good point; too bad that is not specified anywhere :/. So from what you say, wglMakeCurrent doesn’t cause a flush of the rendertarget.
Would this be a good time to “de facto” relax the condition that wglShareLists only works if all the contexts share the same pixelformat? What the MSDN says is that if they don’t, the result is “implementation dependent”, so an implementation could allow sharing among different pixelformats as long as the renderer is the same.