can't figure out alpha blitting

So, imagine I created an FBO with an RGBA format (it has an alpha channel).
I want to write values to its alpha component from a fragment program. However, whatever I write to it using gl_FragData[] ends up being “1” (opaque)… so I’m thinking it must be the blend mode. I want my arbitrary value to end up in the alpha channel, so what blend mode should I use so that, instead of always writing 1.0 to alpha, it just overwrites alpha with what I write in gl_FragData[]?

thanks

I never had that problem; alpha was there, with the computed value. No blending involved.
nVidia cards.

How did you create the context? Does it have an alpha channel enabled?

It seems I didn’t make the question clear. The question is:

How do you set up blending so that what I want to achieve happens?
I tried disabling GL_BLEND, and enabling GL_BLEND with glBlendFunc(GL_ONE, GL_ZERO), but nothing helps: alpha is always 1 in the destination buffer, no matter what value I write to it from the fragment program.

As already said, blending has nothing to do with this; just disable it.
Have a look at the FBO creation; apparently it was not created with an alpha channel. Even without a shader, you can glClear it with glClearColor(0, 0, 0, 0.5) and check that you actually see this 0.5 alpha value.
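One way to check (a sketch that reads the value back with glReadPixels rather than through a shader; it assumes the FBO is bound and <stdio.h> is included):

GLfloat pixel[4];

glClearColor(0.0f, 0.0f, 0.0f, 0.5f);   /* clear to a known, non-trivial alpha */
glClear(GL_COLOR_BUFFER_BIT);

/* read one pixel back from the bound FBO's color attachment;
   if alpha is really stored there, pixel[3] should be ~0.5, not 1.0 */
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, pixel);
printf("alpha after clear = %f\n", pixel[3]);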

This should make you think that blending is not the cause of your problem, as Ilian already said :slight_smile:

Can you post a snippet of code? I think the problem is in how you create the context (e.g. if you use GLUT, I want to see your glutInitDisplayMode parameters…)

I’m using X11/GLX directly, but does the way the context is created matter? As in, this all happens when rendering to an FBO, not the regular framebuffer…

FBO creation code:


glGenFramebuffersEXT(1, &data_fbo.fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, data_fbo.fbo);

glGenTextures(1, &data_fbo.data);

glBindTexture(GL_TEXTURE_2D, data_fbo.data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB,  data_fbo.width, data_fbo.height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, data_fbo.data, 0);
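For reference, a quick completeness check right after the attachment (a sketch; assumes <stdio.h> for the printout) confirms the FBO is actually usable:

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
    printf("FBO incomplete, status = 0x%x\n", status);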

Then the fragment program that writes to the FBO does:


gl_FragData[0] = vec4( 0.5,0.8,0.5, 0.1 );

Then finally, to test what was written to the FBO, I bind back the normal framebuffer, bind the FBO texture (data_fbo.data) as a regular texture, and run a fragment program with the following code (drawing a fullscreen quad):

vec4 data = texture2D( data_fbo_tex, gl_TexCoord[0].st );
gl_FragColor.rgb = data.rgb;

which works flawlessly (I see the color 0.5, 0.8, 0.5),
but if instead I write this line:

gl_FragColor.rgb = vec3(1.0, 1.0, 1.0) * data.a;

everything is white, instead of the (0.1, 0.1, 0.1) I expect, given that 0.1 is what I saved into the alpha component.

Yes, you don’t use the regular framebuffer, but the GL context is the same!

If you do not enable the alpha channel in the context, you cannot write to it.

Note that normally you don’t need to have it enabled, because your window is not transparent and what you see is the result of blending. When you use an FBO, though, you need to enable it explicitly: no blending occurs there, so you really need alpha on the context.

I create the context with the following values:

GLX_RGBA,
GLX_DOUBLEBUFFER,
GLX_RED_SIZE, 4,
GLX_GREEN_SIZE, 4,
GLX_BLUE_SIZE, 4,
GLX_DEPTH_SIZE, 24,
None

Should I enable alpha somewhere else?

OK, maybe I’m wrong; I’m not experienced with GLX, but I have observed similar behaviour using GLUT and Qt. For example, in GLUT (on Windows) I need to enable the alpha channel even in the window’s draw buffer to get alpha working in the FBO too during shader output.

Hmm, it doesn’t seem to be there either, so it seems the FBO is actually lacking alpha?

Well, how do you do this in GLUT or Qt, so I can get an idea (by looking at their source code) of how to do it in GLX?

In GLUT:

glutInitDisplayMode( GLUT_DOUBLE | GLUT_DEPTH | GLUT_RGBA | GLUT_ALPHA );

In Qt:

QGLFormat format = QGLFormat( QGL::DoubleBuffer | QGL::Rgba | QGL::DirectRendering | QGL::AlphaChannel | QGL::NoSampleBuffers );

and pass it to the constructor of QGLWidget
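For example (a sketch; the format object is the one from the line above):

QGLWidget *glWidget = new QGLWidget(format);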

A value of 4 bits seems strange; try 8, and add alpha:

GLX_RGBA,
GLX_DOUBLEBUFFER,
GLX_RED_SIZE, 8,
GLX_GREEN_SIZE, 8,
GLX_BLUE_SIZE, 8,
GLX_ALPHA_SIZE, 8,
GLX_DEPTH_SIZE, 24,
None
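For reference, roughly how that attribute list gets used (a sketch; dpy and screen stand in for your own Display pointer and screen number):

int attribs[] = {
    GLX_RGBA, GLX_DOUBLEBUFFER,
    GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
    GLX_ALPHA_SIZE, 8,      /* request destination alpha in the visual */
    GLX_DEPTH_SIZE, 24,
    None
};

XVisualInfo *vi = glXChooseVisual(dpy, screen, attribs);
if (vi == NULL) {
    /* no visual with destination alpha available */
}
GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);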

A value of 4 is totally weird; that makes 12 bits per pixel? I’m surprised it even got running.

Don’t forget to unmask alpha writes. I mean, be sure that you have called glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE).

Voila! That fixed it! It seems I needed to set alpha in the main framebuffer, otherwise I won’t have alpha in the FBOs… weird…

thanks a lot!

That’s definitely not true. The window’s pixel format does not influence the format of FBO attachments. Double check GL_ALPHA_BITS with and without an FBO bound, and also the internal format of your FBO attachments.
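For example, something like this (a sketch; it assumes <stdio.h> and that data_fbo is the struct from the earlier snippet):

GLint alpha_bits = 0, tex_alpha = 0;

/* with the FBO bound: alpha bits of the current color attachment */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, data_fbo.fbo);
glGetIntegerv(GL_ALPHA_BITS, &alpha_bits);
printf("FBO alpha bits: %d\n", alpha_bits);

/* without the FBO: alpha bits of the window framebuffer */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glGetIntegerv(GL_ALPHA_BITS, &alpha_bits);
printf("window alpha bits: %d\n", alpha_bits);

/* and the actual alpha size of the attached texture's internal format */
glBindTexture(GL_TEXTURE_2D, data_fbo.data);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &tex_alpha);
printf("texture alpha size: %d\n", tex_alpha);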

I understand that the format isn’t the same, but it seems that if I don’t request alpha bits at all, my FBOs don’t get an alpha channel created on them. Maybe this is a driver problem on NVIDIA’s side?