Change FBO attachment on-the-fly

I read an article explaining how they change the color attachment of the current FBO from inside the geometry shader.

They update a 3D texture by sending only one point. They attach one slice of the 3D texture to the FBO, then send a single point to the GPU. Inside the geometry shader, the point is expanded into N quads, and before each quad is emitted, the slice of the 3D texture attached to the FBO is changed.

They use DirectX. Is this possible to implement? And in OpenGL?



This is not possible in OpenGL, and I am 99.9999% sure it is not possible in DX either. I am almost certain you misunderstood some point in their pipeline. Maybe you should post a link to that article, so that people can have a look at it and give you better feedback.

To be more specific: you can have up to 8 color render targets. So it might be possible that they update up to 8 different slices per pass (with the GS generating 8 quads to render into the next 8 slices), but it is not really possible to do this with n slices in one pass. However, with so little information, I can really only guess about the whole thing.


It is not possible to change the color attachment of the FBO to a different texture ID inside the geometry shader. However, you can choose the slice (or cubemap face, or layer of a texture array) you want to render to from the already attached color attachment of your FBO, and you can do that several times within the same invocation of the shader. So yes, it is possible to fill the entire 3D texture by rendering only a single point with the proper geometry shader.
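As a rough illustration, such a geometry shader could look like the sketch below (GLSL 1.50-style syntax; the uniform name, layer count limit, and the full-screen quads are my own assumptions, not taken from the article):

```glsl
#version 150
// Expand one input point into one full-screen quad per slice of the
// layered render target. Writing gl_Layer selects the slice each quad
// is rasterized into.
layout(points) in;
layout(triangle_strip, max_vertices = 128) out; // up to 32 layers * 4 vertices

uniform int numLayers; // number of slices in the 3D texture (assumed <= 32 here)

void main()
{
    for (int layer = 0; layer < numLayers; ++layer) {
        gl_Layer = layer;               // route this quad to slice 'layer'
        gl_Position = vec4(-1.0, -1.0, 0.0, 1.0); EmitVertex();
        gl_Layer = layer;
        gl_Position = vec4( 1.0, -1.0, 0.0, 1.0); EmitVertex();
        gl_Layer = layer;
        gl_Position = vec4(-1.0,  1.0, 0.0, 1.0); EmitVertex();
        gl_Layer = layer;
        gl_Position = vec4( 1.0,  1.0, 0.0, 1.0); EmitVertex();
        EndPrimitive();                 // finish the strip for this slice
    }
}
```

Note that for this to work the whole 3D texture must be attached to the FBO as a layered attachment (glFramebufferTextureARB from the extension, or glFramebufferTexture in GL 3.2), not one slice at a time.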

From GL_ARB_geometry_shader4:

Layered rendering
Geometry shaders can be used to render to one of several different layers
of cube map textures, three-dimensional textures, plus one-dimensional
and two-dimensional texture arrays. This functionality allows an
application to bind an entire "complex" texture to a framebuffer object,
and render primitives to arbitrary layers computed at run time. For
example, this mechanism can be used to project and render a scene onto all
six faces of a cubemap texture in one pass. The layer to render to is
specified by writing to the built-in output variable gl_Layer. Layered
rendering requires the use of framebuffer objects. Refer to the section
'Dependencies on EXT_framebuffer_object' for details.

EDIT: I’ve just noticed a weird thing in the extension specification: it says this extension interacts with ARB_transform_feedback. There is no such extension in the registry…

Use GL_EXT_transform_feedback:

I have no interest in using it. I was trying to point out that there is probably a typo in the spec (ARB → EXT).

I will try it. Is there any example?

NVIDIA has demos for every extension they support. Check the SDK samples page.

For performance reasons you probably don’t want to use the geometry shader. A better approach for writing to different layers of a 3D texture per primitive is to emulate the 3D texture with a 2D texture, so that the 2D texture contains the layers of the 3D texture as different logical regions of a single 2D texture.
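The lookup side of that idea could be sketched like this (the atlas layout, tile counts, and names here are my own assumptions, not anything specific from the suggestion above):

```glsl
#version 150
// Emulate a W x H x D 3D texture with a 2D atlas that stores the D
// slices in a tilesX x tilesY grid. Names and layout are illustrative.
uniform sampler2D atlas;
uniform float tilesX;   // tiles per atlas row
uniform float tilesY;   // tiles per atlas column
uniform float depth;    // number of slices (tilesX * tilesY >= depth)

// Map normalized in-slice coordinates p.xy to the tile holding 'slice'.
// (Ignores edge bleeding between tiles; real code would inset the
// coordinates by half a texel.)
vec2 atlasCoord(vec3 p, float slice)
{
    float tx = mod(slice, tilesX);
    float ty = floor(slice / tilesX);
    return (vec2(tx, ty) + p.xy) / vec2(tilesX, tilesY);
}

// Manual trilinear filtering: hardware bilinear inside two adjacent
// slices, then a blend between them.
vec4 sample3D(vec3 p)
{
    float s  = p.z * depth - 0.5;
    float s0 = clamp(floor(s), 0.0, depth - 1.0);
    float s1 = min(s0 + 1.0, depth - 1.0);
    vec4 a = texture(atlas, atlasCoord(p, s0));
    vec4 b = texture(atlas, atlasCoord(p, s1));
    return mix(a, b, fract(s));
}
```

Bilinear filtering within a slice comes free from the hardware; only the blend between slices is done by hand, at the cost of one extra fetch and a mix per sample.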

Sure, Timothy Farrar, but later I need a 3D texture to visualize correctly using trilinear interpolation. Does the performance decrease a lot?

The alternative I thought of is to make N calls to a function that attaches one slice of the 3D texture and then renders a point sprite. Which is better: changing the slice before rendering, or inside the geometry shader?