EXT_render_target

I must say that the extension looks very good. :slight_smile: I’ve wanted something like this for a long time. I had hoped the super_buffers extension would replace the old WGL_ARB_render_texture stuff, but after hearing “soon” about it for over a year now, I think I can safely declare it dead. If EXT_render_target is what I get, I’m more than happy. It has all the functionality I need. I’m even doubtful that render-to-vertex-array will be all that useful once hardware has access to textures in the vertex shader.

The parameter IMAGE_EXT determines the active image layer of
3-dimensional textures attached to the texture drawable. If an
attached texture is not 3-dimensional, then the value of IMAGE_EXT is
ignored for that texture.
Maybe this part should mention the orientation of the layers, as in parallel to the s,t-plane.

A few things I didn’t understand:

The interaction between color and depth textures isn’t clear to me. How do you make them both drawable and the current render target?

If you have many color and depth textures, how do you pick the color and depth to render to?

glDrawableEXT(GL_TEXTURE_EXT); doesn’t convey this information, and what does glBindTexture(GL_TEXTURE_2D, 0); mean here?

Originally posted by V-man:
Maybe this part should mention the orientation of the layers, as in parallel to the s,t-plane.

This is intended to work effectively the same way as rendering to the back buffer followed by CopyTexSubImage3D(). Do you feel we need an issue or spec language to make this more explicit?
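For concreteness, the legacy path being described is the copy-based one below. This is only a sketch: it assumes a current GL context, an already-defined 3D texture, and a hypothetical app function `renderSliceToBackBuffer()`.

```c
/* Legacy path: render a slice into the back buffer, then copy it into
   layer 'layer' of a 3D texture. EXT_render_target's IMAGE_EXT parameter
   is meant to target the same s,t-parallel layer directly, without the copy. */
glBindTexture(GL_TEXTURE_3D, volumeTex);  /* assumed pre-created 3D texture */
renderSliceToBackBuffer(layer);           /* hypothetical app function */
glCopyTexSubImage3D(GL_TEXTURE_3D,
                    0,                    /* mipmap level */
                    0, 0,                 /* xoffset, yoffset in the layer */
                    layer,                /* zoffset = destination layer */
                    0, 0,                 /* read origin in the framebuffer */
                    width, height);       /* size of the copied region */
```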

A few things I didn’t understand :

The interaction between color and depth textures isn’t clear to me. How do you make them both drawable and the current render target?

If you have many color and depth textures, how do you pick the color and depth to render to?

glDrawableEXT(GL_TEXTURE_EXT); doesn’t convey this information, and what does glBindTexture(GL_TEXTURE_2D, 0); mean here?
RenderTarget() is how you specify which textures are the current render targets for color and depth. I’m not sure what you mean by interaction.

Thanks for the feedback -
Cass
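To make the answer above concrete, setup under the draft extension might look something like this. Everything here is a sketch against the proposed (not final) API: the exact entry-point names, the RenderTarget signature, and the texture IDs are assumptions based on the spec draft and this thread.

```c
/* Hypothetical usage of the proposed EXT_render_target interface:
   rendering directly into a color texture and a depth texture. */
glDrawableEXT(GL_TEXTURE_EXT);          /* drawable is now textures */
glRenderTargetEXT(GL_COLOR, colorTex);  /* assumed signature: (target, texture) */
glRenderTargetEXT(GL_DEPTH, depthTex);  /* depth writes go to depthTex */
drawScene();                            /* hypothetical app function */
glDrawableEXT(GL_FRAMEBUFFER_EXT);      /* back to the window framebuffer */
```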

hi,

i’ve had some some conceptual reflexion about opengl texture/pbo/vbo and rendertarget and how can it be the most intuitive for all developer and easy for us to integrate in current state-of-the-art 3D engine.

IMHO, the only two low-level objects in OpenGL seem to be PBOs and VBOs; a texture is conceptually a subset of a PBO (though I don’t think it’s wired this way in OpenGL).

So wouldn’t it be simplest to treat PBOs and VBOs as the only potential render targets (the <target> parameter of glRenderTarget), and then have the ability to use a PBO as a texture?

This is pure conceptual brainstorming, so I don’t expect any feedback on this approach (as technical specs aim at technical advice :slight_smile: ). I was just trying to find the nicest / most intuitive way to arrange all the pbo/vbo/texture/rendertarget stuff.

++

IMHO, the only two low-level objects in OpenGL seem to be PBOs and VBOs; a texture is conceptually a subset of a PBO (though I don’t think it’s wired this way in OpenGL).
Well, since VBOs and PBOs are the same thing (buffer objects: different uses, but the same kind of memory, and they can be used interchangeably), this would really provide only one object type.

Also, since buffer objects have no intrinsic concept of dimensionality (they are all flat arrays), it’s kind of difficult to bind them directly as a render target/source image. There’s a difference between using glTexSubImage with a buffer object to fill a texture, and actually saying that the primary storage of the texture data is the buffer object (which PBO doesn’t provide). That functionality doesn’t exist. PBOs are used to copy pixel data, as a means of transferring pixel data to/from various textures and the framebuffer, while still providing the other buffer-object functionality (that is, sourcing vertex data).
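The “PBOs are a transfer mechanism, not texture storage” point can be illustrated with a pixel-pack readback, the canonical PBO use of that era. A sketch only: it assumes a current GL context and the ARB_pixel_buffer_object-style tokens and entry points.

```c
/* Read framebuffer pixels into a buffer object: the PBO is a staging
   area for the copy, not the texture's actual storage. */
GLuint pbo;
glGenBuffersARB(1, &pbo);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, width * height * 4,
                NULL, GL_STREAM_READ_ARB);
/* With a pack buffer bound, the pointer argument is an offset into it. */
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, (void *)0);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
```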

I just thought that making OpenGL fully pbo/vbo (say, buffer object) centric would have been really interesting… but that’s not the question here, and it’s all about my dreams :wink:

Anyway, thx for your answer Korval

cheers

Originally posted by cass:
This is intended to work effectively the same way as rendering to the back buffer followed by CopyTexSubImage3D(). Do you feel we need an issue or spec language to make this more explicit?
Sure, there is something to be said for clarity.
Someone might want an extension that allows rendering in other orientations (t,r or s,r).

glRenderTargetEXT! OK, that answers my question.

Then in the “New Tokens” section it would be nice to see what the functions take.

New Tokens

Accepted by the <drawable> parameter of Drawable, RenderTarget,
DrawableParameter, and GetDrawableParameter:
    FRAMEBUFFER_EXT                     0x????

Accepted by the <drawable> parameter of RenderTarget,
DrawableParameter, and GetDrawableParameter:
    GL_TEXTURE

Yes, other specs do this and they don’t mention the hex value for the old tokens.

=======other stuff
Don’t some vendors provide 16-bit or 32-bit stencil? Maybe there is a need for a GL_STENCIL target besides GL_COLOR and GL_DEPTH.

=======other stuff2
It seems that not all vendors can render to a depth texture alone. So what does this mean for this extension?

Then in the “New Tokens” section would be nice to see what the functions take.
Well, as Cass pointed out, the spec isn’t complete yet.

don’t some vendors provide 16 bit stencil, 32 bit stencil?
Not that I’m aware of. Certainly no consumer hardware does.

It seems as if not all vendors can provide render to depth(texture) alone.
Who? If they supported color mask, and rendering to a depth texture at all, then they can support this.

I might have missed something, but just to make sure I didn’t: :wink:

The text says GenerateMipmapEXT() is applied to the currently bound texture. The texture used as the render target is generally not bound while it’s being rendered to (exception: the last example, building mipmap levels from the base level). So to use GenerateMipmapEXT() after finishing my texture rendering, I have to bind it, even if I’m not going to use it as a texture here. Correct?

Given that texture binds are not free, what was the reason for not allowing the GENERATE_MIPMAP texture parameter to take effect? There is no clearly defined “done drawing” point, but RenderTarget and Drawable look like reasonable candidates.

The separate GenerateMipmap function has its uses, but being able to generate the pyramid at the point the base level is created seems an easier design for generic systems. Alternatively, you could promise me that BindTexture is free at that point, but I’m a little sceptical about that.

Thanks

Dirk

Originally posted by dirk:
Alternatively, you could promise me that BindTexture is free at that point, but I’m a little sceptical about that.

Hi Dirk,

We talked about this, and one idea was to have a dummy binding point, just used for modifying objects but was not rendering state. Something like

ActiveTexture( DUMMY );
BindTexture(…);
GenerateMipmap(…);

Binding texture objects to the “DUMMY” slot would be cheap. I’m ambivalent about this approach, though. I like it because it avoids modifying render state just to modify an object, but it does nothing to solve the problem of too much indirection in the texture API.

Thanks -
Cass

I’m concerned with 2 things:

Missing Feature #1: (issue 3) Ability to use a depth texture that is larger than the bound color render target. (mentioned by Korval)

Missing Feature #2: Ability to use the framebuffer’s back buffer bound to some targets (example: DEPTH and STENCIL) at the same time as textures bound to the remaining targets (example: COLOR). (mentioned by evanGLizer)

(castano) That said, I think that the extension still lacks some functionality. However, as someone mentioned, I agree that it’s better to have a clean and minimal spec that works, and to extend it later with the required extensions.
I could agree, if we were talking about some new, unproven, experimental functionality. But this isn’t so.

If anyone happens to still have the DirectX 7 SDK, see the chapter “Common techniques and special effects / Cubic environment mapping”, or the code sample “envcube”. The described technique explicitly reuses the framebuffer’s depth buffer when rendering to each cube face. Needless to say, the framebuffer almost always has a different size than a face of the cube map texture. These are examples of #1 and #2 in use. In DirectX 8 and 9 these abilities remained; only the interface changed, making usage much more obvious. (Actually, DirectX 6 has SetRenderTarget too, but lacking that version’s full SDK, I can’t say anything about its render-target flexibility.)

So, we are not talking about anything exotic. Both #1 and #2 are actually bread-and-butter features of DirectX, about 4½ years old, in continuous use since the introduction of DirectX 7 and GeForce 256. If #1 or #2 require any specific HW support, then it is effectively a requirement for any HW which exposes cube mapping under DirectX.

Should we refrain from including in the extension features which are de facto standard and which have been proven useful? I think all RTT methods in OpenGL have lagged behind D3D long enough. If the intention in developing a minimal spec was to provide granularity in exposing features, in order to implement render targets on pre-GeForce 256 HW too, then I’d understand. But in that case, I hope there will be no big delay between the ‘minimal’ and ‘full’ render-target releases.

(cass) There is a significant distinction between rendering to the framebuffer and rendering to offscreen. There are no shared resources, no pixel ownership tests, no window system and display peculiarities when you’re just rendering to texture.
As I wrote above, DirectX has no problem with that. I’ve tried to imagine what sort of problems you actually mean (does it apply only to color (displayable) data, and thus not to depth/stencil? or only to single-buffered pixel formats? or to overlapping GL windows?). I think in the most pessimistic case any such problem could be solved by allocating an additional color buffer and doing one copy per frame. It is only a matter of deciding which side should do the job: the driver, or the user (with the new render_target extension). If my guesses are correct, I’d vote for the driver side, because in fullscreen mode pixel ownership and related issues cease to exist, so the driver could take a more efficient path automatically.

(barthold) Korval, what would you use that functionality for? The difficulty is in defining what happens when you have say a depth-texture that is bigger (or smaller) than the color-texture bound to the drawable. (…) Our initial idea was to keep it simple and not allow this. I would be interested in hearing otherwise.
I tried to provide an example of its usefulness in this thread:
Also, what happens when you re-use such a depth-texture with yet a different sized color-texture.
As in DirectX, the contents of the depth texture are invalid when the sizes don’t match. However, it would be useful to retain the contents of the depth texture if COLOR were bound to 0 (a “NULL” render target), because the color texture that was bound while rendering to the depth texture might later be used for texturing when rendering to a depth-only render target with our depth texture still bound.

(evanGLizer) Do you really need glDrawable? Why not make it so that when glDrawableParameter for COLOR and DEPTH is zero, the rendering is done to the FRAMEBUFFER? This would allow things like rendering to the color buffer of the FRAMEBUFFER while storing the depth in a texture (is that desirable?). I guess the main reason to have glDrawable is for future use of render-to-vertex-array as a glDrawable parameter?
If Missing Feature #1 is going to be included in the spec, then zero will be needed for a “NULL” render target, just as proposed in issue 9. It is necessary to be able to bind ‘something’ that doesn’t have dimensions, because otherwise it might interfere with other bound targets.

If Missing Feature #2 is going to be included in the spec, the ability to bind the framebuffer to individual targets would require rewriting the interface, which might look like this:
glEnable/Disable(RENDER_TARGET) replaces glDrawable(TEXTURE/FRAMEBUFFER)
RenderTarget(COLOR/DEPTH/etc, GL_NONE, 0 /* ignored */)
RenderTarget(COLOR/DEPTH/etc, GL_FRAMEBUFFER, 0 /* ignored */)
RenderTarget(COLOR/DEPTH/etc, GL_TEXTURE, my_texture_object)

Looks like by the time I got around to reading the spec and the messages, a lot of things had already been discussed. As such I’m just going to say the extension is a good step forward, and I hope this sort of cleanup continues.

Thanks to those that are making it happen.
DN.

Not that I’m aware of. Certainly no consumer hardware does.
I thought some 3Dlabs cards could provide 16-bit stencil. A database of pixel formats for every card would be useful.

Who? If they supported color mask, and rendering to a depth texture at all, then they can support this.
ATI (R3xx). In D3D, you can’t do render-to-depth on ATI. Also, is there an ARB extension for rendering to a depth texture?

This extension is interesting because you can choose a color and a depth buffer to go together, as opposed to the old p-buffer approach where they are always paired.
I’m sure some vendors will have issues with this extension.

And if that is the case, a special internal format for glTexImage such as GL_RGB8_DEPTH24_STENCIL8 …

and allow glTexImage to fail if the implementation can’t handle certain formats.

Originally posted by zeckensack:
Superbuffers would allow you to manage and attach sub-memories. You could take two “classic” mipmapped textures and mix and match individual mipmap levels to form a new mipmap pyramid, without doing copies.
This doesn’t seem to be terribly useful.
Maybe it can be very useful for doing some kind of SGI “ClipMapping”, which is very interesting for “continuous” terrain textures.
[ClipMapping Paper from SGI](http://www.cs.virginia.edu/~gfx/Courses/2002/BigData/papers/Texturing/Clipmap.pdf)
Wishes,
Luis

I really hope this extension will be modified to permit binding render targets separately for color and depth/stencil. As MZ said, this is already possible in DirectX and it is very useful. A simple example: I want to apply a filter to only a subset of scene objects. I render the unfiltered objects to the framebuffer color, altering its depth buffer. Then I set only the color drawable to a texture and render the filtered objects while using and altering the framebuffer depth (depth drawable set to zero or some other specific value). I then apply the specific filters to this texture and finally add it to the frame color, etc. With the extension as proposed, I cannot do this without an additional copy of the depth buffer.

Missing Feature #2: Ability to use the framebuffer’s back buffer bound to some targets (example: DEPTH and STENCIL) at the same time as textures bound to the remaining targets (example: COLOR). (mentioned by evanGLizer)

I agree that this feature is important. I think a clean API for all of this would get rid of the glDrawableEXT() and glDrawableParameter{if}EXT() functions and define a single glRenderTargetiEXT() function like the following.

void glRenderTargetiEXT(GLenum target, GLenum pname, GLint param);

<target> is either GL_COLOR or GL_DEPTH.

<pname> is GL_TARGET_DRAWABLE_EXT, GL_TARGET_TEXTURE_EXT, GL_TEXTURE_FACE_EXT, GL_TEXTURE_IMAGE_EXT, or GL_TEXTURE_LEVEL_EXT.

If <pname> is GL_TARGET_DRAWABLE_EXT, then <param> is GL_FRAMEBUFFER_EXT or GL_TEXTURE and determines where writes go for the buffer named by the <target> parameter. This allows situations such as rendering color to a texture while still rendering depth to the back buffer. The pixel ownership test should only apply to the depth buffer when both the color and depth targets are the framebuffer. (In my opinion, the pixel ownership test really ought to only apply if rendering directly to the front buffer. The spec seems to be ambiguous about this.)

If <pname> is GL_TARGET_TEXTURE_EXT, then <param> specifies a texture object as the target for writes to the buffer named by the <target> parameter when the drawable is GL_TEXTURE.

If <pname> is GL_TEXTURE_FACE_EXT, GL_TEXTURE_IMAGE_EXT, or GL_TEXTURE_LEVEL_EXT, then <param> specifies the face, image, or mipmap level, respectively, for the texture drawable corresponding to the buffer named by the <target> parameter.

A glGetRenderTarget{if}vEXT() function would retrieve state in the expected way.
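To make the proposal concrete, usage of the suggested glRenderTargetiEXT() might look like this. Everything here is hypothetical: it sketches the API exactly as proposed in this post, not anything shipping, and the texture ID and draw function are placeholders.

```c
/* Render color into a texture while depth still goes to the back buffer,
   using the single-entry-point interface proposed above (hypothetical). */
glRenderTargetiEXT(GL_COLOR, GL_TARGET_DRAWABLE_EXT, GL_TEXTURE);
glRenderTargetiEXT(GL_COLOR, GL_TARGET_TEXTURE_EXT, colorTex);  /* assumed texture id */
glRenderTargetiEXT(GL_COLOR, GL_TEXTURE_LEVEL_EXT, 0);          /* base mipmap level */
glRenderTargetiEXT(GL_DEPTH, GL_TARGET_DRAWABLE_EXT, GL_FRAMEBUFFER_EXT);
drawFilteredObjects();                                          /* hypothetical */
```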

Is the glGenerateMipmapsEXT() function really necessary? It seems like a texture with GL_GENERATE_MIPMAP enabled should have its mipmaps implicitly generated as soon as it is no longer the active render target for either the color or depth buffer.

I think something Cass pointed out bears repeating.

Regardless of whether Direct3D supports a feature or not, this extension does not have to expose everything. It can be restrictive to allow an easier initial implementation, and those restrictions can be relaxed in later extensions. Otherwise, you may end up with a situation where it is implemented on some cards but not universally, due to the lack of restrictions.

And the D3D implementation could easily be hiding copying and so forth as well, though D3D’s design makes this somewhat difficult.

Hi Cass,

Originally posted by cass:
Hi Dirk,

We talked about this, and one idea was to have a dummy binding point, just used for modifying objects but was not rendering state. Something like

ActiveTexture( DUMMY );
BindTexture(…);
GenerateMipmap(…);

Binding texture objects to the “DUMMY” slot would be cheap. I’m ambivalent about this approach, though. I like it because it avoids modifying render state just to modify an object, but it does nothing to solve the problem of too much indirection in the texture API.

Thanks -
Cass
Hmm, ok, that would work, too. What were the feelings about this, is there a chance this would get added?

Admittedly, it’s not the cleanest solution, but it’s usable. I suppose the main reason for not honoring automatic mipmap generation was the problem of defining when to do it. Wouldn’t it be cleaner to define either implicit or explicit flush conditions, and regenerate the mipmaps at those points? You could use a glFlush or glFinish while the Drawable is not bound to FRAMEBUFFER, or the calls to RenderTarget and Drawable.

Thanks

Dirk

I think that glGenerateMipmaps() is redundant. It’s much better and more OpenGL-ish to reuse the SGIS_generate_mipmap texture property and regenerate the higher mipmap levels on glFinish().

If, for some reason, someone needs a glGenerateMipmaps() call, it’s better to add an integer argument indicating which mipmap levels should be generated.

But in my strong opinion it’s MUCH better to reuse SGIS_generate_mipmap and define glFinish/glFlush behaviour for rendering to texture.
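The SGIS_generate_mipmap mechanism c0ff refers to is a real, shipping texture parameter; a sketch of how it would combine with render-to-texture follows. The glTexParameteri call is the actual SGIS API; the commented render-target step and the glFlush semantics are the proposal under discussion, not existing behaviour.

```c
/* Mark the texture so the driver regenerates its mipmap chain whenever
   the base level changes; under c0ff's proposal this would also cover
   rendering into the texture, resolved at the next glFlush/glFinish. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
/* ... render to 'tex' via the proposed EXT_render_target interface ... */
glFlush();  /* proposed flush point for mipmap regeneration */
```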


oops, it seems to be already answered:

Also, please relax the dimensions constraint. As was said, in DirectX it is already possible to use one big Z-buffer for rendering to textures of different sizes, which means the hardware can already do it. If the ARB wants to support hardware which can’t do this trick, it would be good if this constraint were removed by some extension made available immediately alongside EXT_render_target.

Thanks,
Dmitry.

Waiting impatiently for first implementations.

Originally posted by c0ff:
I think that glGenerateMipmaps() is redundant. It’s much better and opengl’ish to reuse SGIS_generate_mipmap texture property and regenerate mipmaps of higher level on glFinish().

Isn’t mipmap generation a part of OpenGL since 1.3?