Masking in an FBO with alpha blending


I am confronted with the following scenario:

My framebuffer already contains the “background” of my scene, which should not be changed for now.

Over this background, I want to blend semi-transparent textured geometry. As an example, you can think of a partly transparent curtain.

Now comes the “masking”:

I want to implement a “cut” tool that can cut the curtain.

This cutting shall be done by drawing (with the mouse) into a greyscale image, which is used as a mask for the alpha channel of the curtain.
Black means invisible, white means no change (and intermediate values, e.g. 50%, mean 50% opaque).

In practice, I plan to proceed as follows:

  1. bind the framebuffer object (FBO)
    1.1 render the mask with glBlendFunc(?, ?)
    1.2 render the geometry with glBlendFunc(?, ?)
    1.3 unbind the FBO
  2. render the FBO texture into the normal framebuffer

Am I on the right path?

Masking with alpha? Have you looked at the alpha test?

It can discard fragments based on an alpha threshold.

See the OpenGL 2.1 spec, section 4.1.4 “Alpha test”, page 204.


glEnable(GL_ALPHA_TEST); // use it.

// get rid of any fragment for which the alpha is under 0.4,
// i.e. keep any fragment for which the alpha is greater than
// or equal to 0.4
glAlphaFunc(GL_GEQUAL, 0.4f);

Note that it has been deprecated in OpenGL 3.0 because you can easily implement this functionality within a fragment shader:
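For reference, a minimal sketch of the shader-based replacement (the sampler and varying names here are my own, not from any spec), reproducing the GL_GEQUAL / 0.4 test from above:

```glsl
uniform sampler2D tex;
varying vec2 TexCoord;

void main()
{
    vec4 color = texture2D(tex, TexCoord);
    // Discard any fragment whose alpha is below the threshold,
    // replicating glAlphaFunc(GL_GEQUAL, 0.4).
    if (color.a < 0.4)
        discard;
    gl_FragColor = color;
}
```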


See the OpenGL 3.0 or 3.1 spec, appendix E, “The Deprecation Model”.

It looks like this does not need an FBO or even masking.
Either through fixed-path multitexture combine or through a GLSL shader, multiply the alpha of the RGBA texture by the luminance from the masking texture.
In GLSL:

vec4 color = texture2D(geomTex, TexCoord);
float mask = texture2D(maskTex, TexCoord).r; // depends; use .a if it is a pure alpha texture
color.a = color.a * mask;
gl_FragColor = color;

Then classic blending will take care of fading out the transparent areas.

Thanks for your hints.
Currently I am looking into the multitexture direction.
That way I can maximise the compatibility of my app with older graphics hardware…

One question, since it is really difficult for me to debug my code and to see where the error happens:
if my cut mask is a float array with values in [0,1], then the following calls should create an alpha texture, shouldn’t they?

void updateCutTexture()
{
	if (m_gl_cut_tx_id == 0)
	{
		// first call: create and fill the texture
		glGenTextures(1, &m_gl_cut_tx_id);
		glBindTexture(GL_TEXTURE_2D, m_gl_cut_tx_id);
		glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, m_sx, m_sy,
		             0, GL_ALPHA, GL_FLOAT, m_cutmask);
	}
	else
	{
		// later calls: only update the existing texture
		glBindTexture(GL_TEXTURE_2D, m_gl_cut_tx_id);
		glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_sx, m_sy,
		                GL_ALPHA, GL_FLOAT, m_cutmask);
	}
}

Note that the multitexturing parts are not implemented yet… (surely I will have to pose more questions here :slight_smile:)
Thanks again,

Looks correct.

Just a quick note: GL_ALPHA as internalFormat is in fact unsigned byte, so only integer values in [0;255]. As you send an array of GL_FLOAT values, the GL driver will have to convert the data on the CPU before sending it to the GPU.
So if you can directly generate your mask as an array of unsigned bytes, this conversion step is avoided and it will provide some performance gain on each updateCutTexture() call.

I am a bit rusty with legacy multitexture, but as the default texEnv mode is GL_MODULATE, it should be enough to bind texture unit 0 to your geomTex and texture unit 1 to maskTex. Both alpha channels will be multiplied.



This rocks! Thank you very much!

The only thing I additionally had to do was use the glMultiTexCoord* functions to specify the texture coordinates.