hi, I use glTexSubImage2D to replace a sub-region of a big texture, but I don’t want the parts of the sub-image whose alpha is 0.0f to replace the corresponding pixels of the big texture — just like doing an alpha-test rendering in glTexSubImage2D.
So can I, or how can I, do that? Thanks in advance.
So, what do you want to do? I understand that you’d like to ‘add’ your sub-texture onto your big original texture, is that right?
I think multitexturing fits best for that purpose.
I think he wants to use glCopyTexSubImage2D to copy a region of the frame buffer to a texture, but he only wants to copy the pixels whose alpha is not 0.0.
glCopyTexSubImage2D is simply a copy routine, the alpha test does not affect it.
I suppose you could copy the sub region to a second texture then render both textures (with screen aligned quads) and blend them together.
Or render the current texture over your scene and blend it before you do the copy.
Thanks for the reply and sorry for vagueness of the question.
Yes, I want to ‘add’ my sub texture to a big original texture.
For example, I have a 256×256 texture for a game creature, and I want to use a 64×64 sub-texture to replace the region (0, 0, 64, 64) of that 256×256 texture.
But the 64×64 sub-texture has irregular, completely transparent (alpha = 0) parts, so I want to filter those out.
I don’t want to use multitexturing, because the extra glBindTexture calls are costly, and I know the final 256×256 texture stays constant in the game.
glCopyTexSubImage2D copies data from the frame buffer; I want to use glTexSubImage2D to copy data from a sub-image I provide.
In fact, currently I have to do it in a clumsy way:
1. Use glGetTexImage to get the image data of the sub-region (0, 0, 64, 64) of the 256×256 texture.
2. Replace that data pixel by pixel with my 64×64 sub-image data, skipping any pixel where the sub-image’s alpha is 0.
3. Use glTexSubImage2D to write the merged data back into the 256×256 texture.
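If you stay with this CPU-side approach, the per-pixel replacement in step 2 is at least simple. A minimal sketch of just that merge (plain C, assuming RGBA8 data; the GL calls of steps 1 and 3 are left out, and the function name is my own):

```c
#include <stddef.h>

/* Copy src (w*h RGBA8 pixels) over dst, skipping src pixels whose
 * alpha byte is 0. dst holds the region previously fetched with
 * glGetTexImage; afterwards it can be written back with
 * glTexSubImage2D. */
static void merge_skip_transparent(unsigned char *dst,
                                   const unsigned char *src,
                                   size_t w, size_t h)
{
    for (size_t i = 0; i < w * h; ++i) {
        const unsigned char *s = src + i * 4;
        if (s[3] != 0) {          /* alpha != 0: take the sub-image pixel */
            unsigned char *d = dst + i * 4;
            d[0] = s[0];
            d[1] = s[1];
            d[2] = s[2];
            d[3] = s[3];
        }
    }
}
```

For a 64×64 region you would call it as `merge_skip_transparent(bigRegion, subImage, 64, 64)` between the glGetTexImage and glTexSubImage2D calls.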
You could draw a quad over the region you want to copy back into the texture, textured with the sub-image, and blend it using an appropriate blending function, for example GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA. Then you can use glCopyTexSubImage2D to copy the result back into the texture…
Either way, you have to render something to use the alpha test. The general method for statically combining textures is to draw what you want to be the result using multitexturing and blending, either to the framebuffer and then copying with glCopyTexSubImage, or directly drawing to a texture with FBO or pbuffers.
Basically it’s the same as your proposed method, but entirely on the GPU. Instead of reading the texture to ram, you draw it to the framebuffer, and instead of manually manipulating the texture in ram, you use the normal OpenGL pipeline…
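Put together, the GPU route described in the last two posts might look roughly like this. This is only a sketch, assuming a current GL context with a pixel-aligned orthographic projection, valid texture ids `bigTex` (256×256) and `subTex` (64×64), and using the alpha test rather than blending to reject the transparent pixels (the function name and layout are my own):

```c
#include <GL/gl.h>

/* Sketch: combine subTex into the (0,0,64,64) region of bigTex on the GPU. */
void combine_on_gpu(GLuint bigTex, GLuint subTex)
{
    /* 1. Draw the current 64x64 region of the big texture to the framebuffer.
     *    0.25 = 64/256, the texcoord extent of that region. */
    glEnable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);
    glBindTexture(GL_TEXTURE_2D, bigTex);
    glBegin(GL_QUADS);
    glTexCoord2f(0.00f, 0.00f); glVertex2f( 0.0f,  0.0f);
    glTexCoord2f(0.25f, 0.00f); glVertex2f(64.0f,  0.0f);
    glTexCoord2f(0.25f, 0.25f); glVertex2f(64.0f, 64.0f);
    glTexCoord2f(0.00f, 0.25f); glVertex2f( 0.0f, 64.0f);
    glEnd();

    /* 2. Draw the sub-texture on top, rejecting alpha == 0 fragments. */
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.0f);
    glBindTexture(GL_TEXTURE_2D, subTex);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f( 0.0f,  0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(64.0f,  0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(64.0f, 64.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f( 0.0f, 64.0f);
    glEnd();
    glDisable(GL_ALPHA_TEST);

    /* 3. Copy the composited framebuffer region back into the big texture. */
    glBindTexture(GL_TEXTURE_2D, bigTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 64, 64);
}
```

Note this needs a live rendering context and framebuffer, so it can’t run standalone; with nearest filtering and pixel-exact alignment the copied region is an exact composite, as the posts above point out.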
This would be a useful operation, but it cannot be done in OpenGL. Alpha test is a raster op and you cannot use it to test “image pipeline” pixels.
Overmind’s suggestion is an excellent one, and will work nicely provided you align your texture (filtering may be an issue, so you may need to be pixel-accurate and use a nearest filter). One alternative would be to use render-to-texture. This would let you blend and/or alpha-test rendered pixels as they contribute to the texture you draw to.