Alternative to glBlitFramebuffer()

Hi,

Currently I resolve my multisample framebuffer with trivial code:

	glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
	glBindFramebufferEXT( GL_READ_FRAMEBUFFER_EXT, ms_offscreen.fb );
	glBindFramebufferEXT( GL_DRAW_FRAMEBUFFER_EXT, offscreen.fb );
	if (offscreen.depth && ms_offscreen.depth)
	{
		glBlitFramebufferEXT( 0, 0, vpw, vph, 0, 0, vpw, vph, GL_DEPTH_BUFFER_BIT, GL_NEAREST );
		glError();
	}
	glBlitFramebufferEXT( 0, 0, vpw, vph, 0, 0, vpw, vph, GL_COLOR_BUFFER_BIT, GL_NEAREST );
	glBindFramebufferEXT( GL_READ_FRAMEBUFFER_EXT, 0 );
	glBindFramebufferEXT( GL_DRAW_FRAMEBUFFER_EXT, 0 );

Works exactly as expected on Nvidia cards but simply doesn’t work on ATI cards.

So, if I want to avoid using glBlitFramebuffer(), how can I resolve a multisample RGBA+Z framebuffer to a texture_2d RGBA+Z framebuffer?

What calls can I use to grab the multisample bits and do the right thing wrt resolving multisample down to single-sample?

I’m totally stumped.

Thanks,
Adam

There is no alternative. If it’s not working on ATI cards, then either you’re doing something wrong or it’s a driver bug.

What ATI card do you have? Which driver version? What does “it simply doesn’t work” mean? Is it crashing, giving GL errors, or is the resulting texture broken?

I am currently facing two problems when blitting (or resolving MSAA) depth values on ATI cards:
a) on a very old card (FireGL V5200, same chip as in the Radeon X1900, driver 8.58), the depth values end up upside-down in the destination texture for MSAA resolve blits.
b) on a newer card (FirePro V8750) with the newest drivers, I get GL_INVALID_OPERATION when trying to blit between two FBOs. I still need to figure out why.

Both code parts work on NVidia, sigh :-/

There is no alternative. If it’s not working on ATI cards, then either you’re doing something wrong or it’s a driver bug.
Well, I’d advise you to first run this to ground, check for GL errors, and if you can’t find a working solution, post a short test program that illustrates the problem here and mail it to ATI devrel to open a bug report. But…

In truth, you can resolve a multisample RGBA framebuffer yourself with a shader. You can find this in the archives, but in short: bind the MSAA texture to a shader sampler, bind the target downsample FBO as the draw framebuffer, and use shader guts like this:


// Frag shader
uniform sampler2D msaa_tex ;
vec3 color = vec3( 0,0,0 );

for ( int sample = 0; sample < NUM_SAMPLES; sample++ )
  color += texelFetch( msaa_tex, texcoord, sample ).rgb;

gl_FragData[0] = vec4( color / NUM_SAMPLES, 1 );

With depth, there’s no single meaningful way to resolve multiple samples (averaging depth values is rarely what you want), so do whatever fits your use case there.

Just to fill in a touch of what DarkPhoton wrote:


uniform sampler2D msaa_tex ;

should probably be


uniform sampler2DMS  msaa_tex ;

and you’ll need to create multisample textures; the function to look at is glTexImage2DMultisample. I am pretty sure the spec says you can also make depth and depth_stencil multisample textures, so you should be able to recover the depth values too, I’d think.
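A minimal sketch of that setup, assuming a GL 3.2+ context (the variable names, sizes, and sample count here are illustrative, not from the original posts):

```c
/* Sketch: create MSAA color and depth textures and attach them to an FBO.
   Requires a GL 3.2+ context; names and sizes are illustrative. */
GLuint msaaColorTex, msaaDepthTex, msaaFbo;
const GLsizei samples = 4, w = 512, h = 512;

glGenTextures(1, &msaaColorTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msaaColorTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8,
                        w, h, GL_TRUE);

glGenTextures(1, &msaaDepthTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msaaDepthTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples,
                        GL_DEPTH_COMPONENT24, w, h, GL_TRUE);

glGenFramebuffers(1, &msaaFbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, msaaColorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D_MULTISAMPLE, msaaDepthTex, 0);
```

After rendering into this FBO, the multisample textures can be bound to sampler2DMS uniforms and resolved in a shader as described above.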

But still, as Dark Photon suggests, check for GL errors and then if none, make a minimal test case program and send it off to ATI.

Lastly, I’d suggest using the non-EXT named functions for the FBO operations… it might be that a driver treats glSomeFunctionEXT differently from glSomeFunction. ATI often tries to follow the letter of the spec, and if I remember correctly there are a number of limitations in GL_EXT_framebuffer_object that do not apply to GL 3.x FBOs (the one that stands out to me is multiple render targets, where the color buffer formats must match under EXT but not in GL 3.x).
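For reference, the resolve from the first post with core-profile (GL 3.0+) entry points would look something like this (same logic as the original snippet, reusing its names, just without the EXT suffixes):

```c
/* Core-profile version of the resolve blit; no EXT suffixes.
   ms_offscreen.fb / offscreen.fb / vpw / vph are from the original post. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, ms_offscreen.fb);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, offscreen.fb);
glBlitFramebuffer(0, 0, vpw, vph, 0, 0, vpw, vph,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBlitFramebuffer(0, 0, vpw, vph, 0, 0, vpw, vph,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```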

Thanks for suggestions.

By “doesn’t work”, I mean I get random bits from the depth buffer blit and some ATI cards copy/resolve the color buffer, some don’t.

No GL errors whatsoever.

Dark Photon:
I can’t use a shader to do the blit because the Multisample buffer is a RenderBuffer, the destination is a Texture2D.

I’ve attached a one-file GLUT test app that creates a 512x512 window, renders into a multisample renderbuffer FBO, resolves into a single-sample texture2d FBO, then displays the result; the left side is color, the right side is depth.

I get garbage on the depth side.

So from the responses generally, am I right in thinking others’ experience of glBlitFramebuffer() is that it just works on ATI cards? I.e. nothing obviously brain-damaged I’m doing?

Adam

Thanks. Right! Missed that when I was coding-on-the-fly.

Then change it from an MSAA RenderBuffer to an MSAA texture. Then you can do it. You also might find that the Blit driver bug (if one exists) depends on whether you’re using a renderbuffer or texture, so I’d try MSAA texture anyway.

So from the responses generally, am I right in thinking others’ experience of glBlitFramebuffer() is that it just works on ATI cards? I.e. nothing obviously brain-damaged I’m doing?

skynet’s reply indicated that blitting between FBOs just flat didn’t work on newer ATI cards. Anyway, let’s wait for some folks with ATI cards to try your code.

Here’s an updated mainglut.cpp that compiles on Linux.

Nice one. Thanks.

As you’ll see from the code, this needs to compile for Mac and PC. Looks like Mac OpenGL SDK (10.5 or 10.6) doesn’t support MSAA Textures… Great.

Is there really no way of getting the color & depth from a RenderBuffer?

I have an update!

b) on a newer card (FireProV8750) with newest drivers, I get GL_INVALID_OPERATION when trying to blit between two FBOs. I still need to figure out why.

I just figured out why it failed in my case: I got GL_INVALID_OPERATION because the internal depth formats of the source and destination FBOs did not match 100%. The source was GL_DEPTH_COMPONENT24, the destination was GL_DEPTH24_STENCIL8. The blit failed even though I only blitted the depth values.

On ATI, glBlitFramebuffer is very sensitive with regard to matching depth/stencil formats.
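In other words, give the source and destination attachments the same internal format. A sketch (the renderbuffer names and sizes here are mine, not from the original post):

```c
/* Use the *same* depth/stencil internal format on both sides of the blit;
   mixing GL_DEPTH_COMPONENT24 with GL_DEPTH24_STENCIL8 made it fail on ATI. */
glBindRenderbuffer(GL_RENDERBUFFER, srcDepthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4,
                                 GL_DEPTH24_STENCIL8, w, h);

glBindRenderbuffer(GL_RENDERBUFFER, dstDepthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, w, h);
```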

Works for me - ATI Mobility Radeon 4570, both on Linux and Windows, Catalyst 10.12.

Btw, I remember that while playing with glBlitFramebuffer and multisampled render targets I was getting different results on the core and compatibility profiles - using the latter, I got garbage in odd frames.

You should be getting an error if you attempt to blit from D24 to D24_S8:

Calling BlitFramebufferEXT will result in an INVALID_OPERATION
error if <mask> includes DEPTH_BUFFER_BIT or STENCIL_BUFFER_BIT
and the source and destination depth and stencil buffer formats do
not match.

Although the spec doesn’t make any attempt to define “match”. Does a window with 24 bits of depth and 8 bits of stencil match D24_S8? How do you know?

Does a window with 24 bits of depth and 8 bits of stencil match D24_S8? How do you know?

In theory, you cannot, because there is no way to ask the window framebuffer for its internal depth/stencil format.
In practice, though, it seems to work to infer GL_DEPTH24_STENCIL8 from GL_DEPTH_BITS == 24 and GL_STENCIL_BITS == 8.
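The queries that inference relies on look like this (note GL_DEPTH_BITS and GL_STENCIL_BITS are deprecated in GL 3.x core contexts, so this is compatibility-profile territory):

```c
/* Query the default framebuffer's depth/stencil bit counts
   (compatibility profile; deprecated in GL 3.x core). */
GLint depthBits = 0, stencilBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);
glGetIntegerv(GL_STENCIL_BITS, &stencilBits);
/* depthBits == 24 && stencilBits == 8 suggests GL_DEPTH24_STENCIL8 */
```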

And yeah, ‘format matching’ is by no means defined in the spec :(


    11) Should blits be allowed between buffers of different bit sizes?

        Resolved: Yes, for color buffers only.  Attempting to blit
        between depth or stencil buffers of different size generates
        INVALID_OPERATION.

So I think it should be interpreted as a bug in ATI’s driver.
This is for the general case, though; when multisampling is involved the spec is even more vague…

Why do you think it is a bug? The spec allows the implementation to throw INVALID_OPERATION if the depth/stencil formats of source and destination do not match.

Because ‘issue 11’ implies that ‘match’ means equal bit count, which is the case for depth_24 and depth_24_stencil_8.

But my post was about the piece of code I posted that compiles fine, runs without any errors yet fails to blit the depth buffer.

Can anyone point to an ATI-written example showing how to resolve an MS FBO?
Perhaps it will reveal some secret requirements.

Lastly, where does one post code demonstrating bugs to AMD? Is there somebody I should contact?

Yes, I agree ‘issue 11’ is wrt pixel bit count and is not implying that MS-to-single-sample is only supported for color buffers.

In theory, you cannot, because there is no way to ask the window framebuffer for its internal depth/stencil format.

Of course there is. You can’t ask for the OpenGL image format name, but you can ask for the bit depth of the various components.

Because ‘issue 11’ implies that ‘match’ means equal bit count, which is the case for depth_24 and depth_24_stencil_8.

24 + 8 != 24.

It’s generally safer to play it safe and make everything match perfectly.

This post describes it:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=267385#Post267385

If you want to report an OpenGL issue to AMD, you can use one of the two following:

regards,

Pierre B.
AMD Fellow