MRT, render to texture

Right now, I’m just praying that FBO allows this to be handled efficiently.
That’s why I didn’t like Barthold’s language in describing FBO as

EXT_framebuffer_object offers pbuffer functionality in core OpenGL, where the most important application will likely be render-to-texture. This, for example, means only a single context will be needed to render to many textures.
This can easily be interpreted as, “EXT_FBO is like pbuffers, only without the context switch,” which means that the ARB has failed yet again. However, I would hope that, after 2 years, they understand what it is we want out of RTT, and that they will give it to us.

Why can’t you texture from all four AUX buffers at once? I don’t see any problem there – you just bind each one individually to different texture units.
I didn’t know it was possible to bind them that way. I thought binding any of the color buffers locked up the entire pbuffer.
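For what it’s worth, here is roughly what binding two color buffers of the same pbuffer to two different texture units looks like, as I understand WGL_ARB_render_texture. This is just a sketch: it assumes the extension entry points are already loaded, and `hPbuffer`, `texFront`, and `texAux0` are hypothetical names for an existing pbuffer handle and pre-created texture objects.

```c
/* Sketch only -- assumes a current GL context, loaded
 * WGL_ARB_render_texture entry points, and a pbuffer created with a
 * render-to-texture pixel format.  Names are hypothetical. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texFront);
wglBindTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texAux0);
wglBindTexImageARB(hPbuffer, WGL_AUX0_ARB);

/* ... draw using both textures ... */

wglReleaseTexImageARB(hPbuffer, WGL_AUX0_ARB);
wglReleaseTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);
```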

As for performance, it is still bad, but running it under the debugger slows it down quite a bit more. I think I’ve heard of this problem before.

Eric, so why don’t you render to something else, like I’m going to do?
Bind your front/back buffer and your AUX0 buffer. For depth, copy the depth into another aux buffer (AUX1). Then, with these three textures, render to another pbuffer.
Would this work for you?
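If it helps, here is roughly how I picture that sequence. All names are hypothetical (`pbufA_dc`, `drawScene`, `writeDepthAsColor`, etc.), and it assumes WGL_ARB_render_texture plus a pixel format with enough AUX buffers:

```c
/* 1. Render the scene into pbuffer A: color to FRONT, extra data to AUX0. */
wglMakeCurrent(pbufA_dc, pbufA_rc);
drawScene();

/* 2. Get depth into AUX1 as a color image, e.g. by drawing a
 *    frame-sized quad that outputs depth (hypothetical helper). */
glDrawBuffer(GL_AUX1);
writeDepthAsColor();

/* 3. Bind FRONT, AUX0, and AUX1 of pbuffer A as three textures and
 *    render into pbuffer B. */
wglMakeCurrent(pbufB_dc, pbufB_rc);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texColor);
wglBindTexImageARB(hPbufA, WGL_FRONT_LEFT_ARB);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texData);
wglBindTexImageARB(hPbufA, WGL_AUX0_ARB);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, texDepth);
wglBindTexImageARB(hPbufA, WGL_AUX1_ARB);
drawCompositePass();   /* hypothetical final pass */
```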

MRT example:

Yeah, it’s fairly complex, but it does show how to do it.

Eric, I am doing 32-bit float blending using the FRONT and two AUX buffers of the same pbuffer. I bind two of the buffers (AUXx and FRONT) as textures and render to the next AUX buffer. It works fine, although the spec says that the result is undefined:

If any of the pbuffer's color buffers are bound to a texture, then 
rendering results are undefined for all color buffers of the pbuffer. 

I don’t know why there is such a restriction.
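To spell out the rotation being described (a sketch with hypothetical names; `buffers[]` holds `WGL_FRONT_LEFT_ARB`, `WGL_AUX0_ARB`, `WGL_AUX1_ARB`, `drawBuffer[]` the matching `GL_FRONT`/`GL_AUX0`/`GL_AUX1` draw-buffer enums, and `tex[]` three texture objects):

```c
/* Sketch: each pass binds two of the three buffers as input textures
 * and renders into the third; the roles rotate every pass.  Note this
 * is exactly the pattern the quoted spec text declares undefined. */
int dst = 2;
for (int pass = 0; pass < numPasses; ++pass) {
    int srcA = (dst + 1) % 3, srcB = (dst + 2) % 3;

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[srcA]);
    wglBindTexImageARB(hPbuffer, buffers[srcA]);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex[srcB]);
    wglBindTexImageARB(hPbuffer, buffers[srcB]);

    glDrawBuffer(drawBuffer[dst]);
    drawBlendPass();                  /* hypothetical shader pass */

    wglReleaseTexImageARB(hPbuffer, buffers[srcB]);
    wglReleaseTexImageARB(hPbuffer, buffers[srcA]);
    dst = srcA;                       /* rotate the render target */
}
```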

I’ve also tried binding DEPTH/AUXx/FRONT/BACK buffers as textures from different pbuffers while rendering to another pbuffer at the same time. That also works. But I am not sure whether binding a depth buffer as a texture while using the same depth buffer for depth testing will work. Maybe it would if glDepthMask is GL_FALSE.

You can try rendering a frame-sized quad with the depth buffer bound as a texture instead of using glCopyPixels. It works OK in my case (<1 msec, as I remember), though I haven’t tested whether glCopyPixels is faster.
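Something like the following, I’d guess (a sketch, assuming WGL_NV_render_depth_texture so the pbuffer’s depth buffer can be bound with wglBindTexImageARB; `texDepth` is a hypothetical pre-created texture object):

```c
/* Sketch: bind the pbuffer's depth buffer as a texture
 * (WGL_NV_render_depth_texture)... */
glBindTexture(GL_TEXTURE_2D, texDepth);
wglBindTexImageARB(hPbuffer, WGL_DEPTH_COMPONENT_NV);

/* ...and splat it with a frame-sized quad instead of glCopyPixels. */
glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
glOrtho(0, 1, 0, 1, -1, 1);
glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

glDisable(GL_DEPTH_TEST);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(1, 0);
    glTexCoord2f(1, 1); glVertex2f(1, 1);
    glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();
glEnable(GL_DEPTH_TEST);

glPopMatrix();
glMatrixMode(GL_PROJECTION); glPopMatrix();
glMatrixMode(GL_MODELVIEW);

wglReleaseTexImageARB(hPbuffer, WGL_DEPTH_COMPONENT_NV);
```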

By the way, I am using an NVIDIA 6800 GT, driver version 66.81.

I’m not sure why some of those restrictions exist either. As you point out, you get the behavior that makes sense even though it is undefined. Unfortunately, relying on undefined behavior that happens to work right now is dangerous. Any time a new driver comes out, it might suddenly stop working the way you want it to. Getting the same behavior across different hardware vendors is also very difficult. I need to be writing code that will still work five years from now without any intervention from myself, and unfortunately that means relying on explicitly defined behavior only.

The spec is very confusing:

From WGL_render_texture:


If <hPbuffer> is the calling thread’s current drawable, wglBindTexImageARB performs an implicit glFlush.

After this function is called, the pbuffer associated with <iBuffer> is no longer available for reading or writing.

(Does this mean that we can bind, for example, BACK_LEFT and AUX1 and then render to AUX2 in the same pbuffer, since only BACK_LEFT and AUX1 are bound but AUX2 is not?)

Any read operation, such as glReadPixels, which reads values from any of the pbuffer’s color buffers or ancillary buffers, will produce indeterminate results. In addition, any draw operation that is done to the pbuffer prior to wglReleaseTexImageARB being called, produces indeterminant results.


The spec is a bit confusing, but it was written before multiple render targets were even dreamt of. Some of the restrictions were intended to prevent you from reading from a buffer that you were rendering to at the same time. However, all shipping implementations should have no problem binding the different buffers of a pbuffer as different textures simultaneously.

I also hope that the limitation of only being able to use 3 or 4 channels with RTT can be removed. Sometimes we don’t need all of those channels; when doing shadow mapping, for example, a single floating-point channel is enough. Ultimately, the ARB should think up a new render-target solution rather than adding refinements to the existing pbuffer mechanism. If I understand it correctly, pbuffers were designed to be a clone of the framebuffer, and that’s where many of the existing limitations come from.