What I have:
- Alpha mask data, with alpha components that I want to use for a texture.
- Image data that frequently changes, that does not have any alpha components.
- A texture that I frequently update with the image data, using glTexSubImage2D; drawn onto a single, simple quad.
What I want to do:
- Draw the texture using the colors from the image data, with a transparency mask from the original alpha mask.
The image data is updated frequently (in this application it's frames from a video stream), and the alpha mask is essentially an overlay mask. But I'm drawing the textured quad in arbitrary orientations in 3D space, and it may even be obscured by other objects, so I can't just use a simple 2D overlay. Performance is important, so I don't want to manually interleave the image data and alpha mask before each call to glTexSubImage2D. The alpha mask stays constant the entire time.
Is there something I can do, to combine the alpha components from the alpha mask and the color components from the image? Is there some kind of texture layering or something that helps me with this?
Hopefully I explained that clearly.
P.S. Some of the parts of the image I want to mask out are not black, so I can't use simple masking tricks that expect the image to have a black background.
If I understand your situation correctly, you could use a shader with both textures bound at once, and simply combine them in the shader.
Thanks for your reply.
I don’t know anything about shaders (which I guess is the missing piece in my puzzle) and, unfortunately, I don’t want to take too much time to learn about them right now because a deadline is fast approaching… is there a simpler way? Was it possible to do this before shaders came out?
Old style multitexture should do the trick.
I do not know the implementation details, but this should work:
tex0 = dynamic RGB texture, from video stream
tex1 = static RGBA mask, RGB as white and A as your alpha
Then combining (tex0 modulated by tex1) with classic alpha blending should give the result you want. The default texture environment mode is GL_MODULATE, which multiplies the textures together, so with the mask's RGB set to white the video colors pass through unchanged while the mask's alpha scales the fragment alpha; it should be fine out of the box.
Maybe this oldish sample can help you:
That’s kind of a cool looking example. If anybody is interested, to compile it on Linux I used the following Makefile:
SOURCES = underwater.c texload.c dino.c
CFLAGS = -W -Wall -pedantic
LIBS = -lglut

underwater: $(SOURCES)
	gcc $(CFLAGS) $(SOURCES) $(LIBS) -o $@

clean:
	rm -f underwater *~
I also had to add the following to the beginning of the idle() function, around line 396 of underwater.c, as it ran a bit too fast for me:
static int slowdown = 0;
if ((++slowdown)%10000) return;
/* Advance the caustic pattern. */
Anyway, multitexturing was exactly what I was looking for; it does what I want and is incredibly easy to use. I'll try to post an example soon, FWIW (my test code is way too ugly to make sense as an example). Basically, the default texture environment mode (GL_MODULATE) does the combining for you, like you say, and all you have to do to set it up is:
// bind and set up image texture; does not need alpha channel in internal format
// bind and set up alpha texture; GL_ALPHA internal, GL_ALPHA format.
The mask texture doesn’t need any color components at all, it can be entirely alpha values. Then when rendering:
glEnable(GL_TEXTURE_2D); /* on each texture unit, after glActiveTexture */
And pass glMultiTexCoord2f(GL_TEXTURE0, …) and glMultiTexCoord2f(GL_TEXTURE1, …) for the texture coordinates at each vertex, and it all magically works.
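FWIW, here's roughly what the full setup and draw looks like, as a sketch: texture IDs, coordinates, and the function name are illustrative, and it assumes GL 1.3+ (or ARB_multitexture) with the two textures already created as described above.

```c
/* Sketch: draw a quad textured by the video image on unit 0,
 * masked by a constant GL_ALPHA texture on unit 1.
 * The default GL_MODULATE env mode combines them. */
#include <GL/gl.h>

void draw_masked_quad(GLuint imgTex, GLuint maskTex)
{
    /* Unit 0: the frequently-updated RGB video texture. */
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, imgTex);

    /* Unit 1: the constant alpha mask. For a GL_ALPHA texture,
     * GL_MODULATE leaves RGB alone and multiplies only the alpha. */
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, maskTex);

    /* Standard alpha blending so masked-out fragments are transparent. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glBegin(GL_QUADS);
    glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
    glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);

    glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f);
    glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);

    glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f);
    glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 1.0f);
    glVertex3f( 1.0f,  1.0f, 0.0f);

    glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f);
    glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 1.0f);
    glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();

    glActiveTexture(GL_TEXTURE0); /* restore unit 0 as active */
}
```

This is GL state configuration, so there's nothing to assert without a live context; the point is just which calls go with which texture unit.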
Thanks for pointing me in the right direction!