Render behind existing image

I’m trying to render OpenGL primitives into the transparent areas of an existing image (on an iPhone), using the following blending function…


    glEnable(GL_BLEND);
    // Draw "under" the existing image: new fragments should only show
    // through where the destination alpha is low (i.e. transparent areas).
    glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE);
    glDrawElements(...);

However, it always renders over the existing image.
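
For clarity, here’s what I expect that blend function to compute per pixel (just a sketch of the maths in C, assuming non-premultiplied colours with values normalised to [0, 1]):

    // Reference version of the blend above, applied per pixel.
    typedef struct { float r, g, b, a; } Pixel;

    static Pixel blend_under(Pixel src, Pixel dst)
    {
        float k = 1.0f - dst.a;              /* GL_ONE_MINUS_DST_ALPHA */
        Pixel out;
        out.r = src.r * k + dst.r * dst.a;   /* GL_DST_ALPHA on the dest */
        out.g = src.g * k + dst.g * dst.a;
        out.b = src.b * k + dst.b * dst.a;
        out.a = src.a + dst.a;               /* GL_ONE, GL_ONE */
        if (out.a > 1.0f) out.a = 1.0f;      /* GL clamps the result */
        return out;
    }

Where the destination is opaque (dst.a == 1) the source contributes nothing, and where it is transparent (dst.a == 0) the source shows through untouched — which is exactly the render-behind effect I’m after.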

Here’s the destination image creation…


    // Create a 300×300 RGBA8888 texture, attach it to an FBO so it can be
    // rendered into, then clear it to fully transparent.
    texture_preview = [[Texture2D alloc] initWithData:NULL
                                          pixelFormat:kTexture2DPixelFormat_RGBA8888
                                           pixelsWide:300
                                           pixelsHigh:300
                                          contentSize:CGSizeMake(300, 300)];
    glGenFramebuffersOES(1, &texturePreviewFrameBuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, texturePreviewFrameBuffer);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, texture_preview.name, 0);
    glClearColor(1.0, 0.0, 0.0, 0.0);   // red, but with alpha 0 (transparent)
    glClear(GL_COLOR_BUFFER_BIT);
    glViewport(0, 0, texture_preview.contentSize.width, texture_preview.contentSize.height);

Source and destination alpha seem to be working fine elsewhere: I can render partially transparent objects, they show through to previously rendered objects, and the alpha channel is correctly rendered and displayed in the output image.

Any ideas? Do I need to do something else to enable destination alpha processing for the renderer?

You need to render them in back-to-front order.

Unfortunately, I can’t. This isn’t 3D: I’m rendering 2D textures into an existing 2D image, and I need to render into the transparent parts of the original.

Polygon antialiasing is recommended to be rendered front to back, so there must be a way to get destination alpha working…

I’m not sure I understand everything you said.

You could try rendering your 3D objects into an FBO/texture first. Then render that with a quad covering the whole screen (I guess just like you render your texture), and then draw another quad over it with your texture.
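
Roughly like this, as a sketch (GLES 1.1 style, to match your FBO code; offscreenFrameBuffer, mainFrameBuffer, offscreenTexture, existingImageTexture and drawFullScreenQuad are placeholder names):

    // 1. Render the new primitives into an offscreen texture.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFrameBuffer);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... glDrawElements(...) for the new content ...

    // 2. Back in the main framebuffer, draw the offscreen result first...
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, mainFrameBuffer);
    glBindTexture(GL_TEXTURE_2D, offscreenTexture);
    drawFullScreenQuad();

    // 3. ...then the existing image over it with ordinary alpha blending,
    // so it covers the new content wherever it is opaque.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture(GL_TEXTURE_2D, existingImageTexture);
    drawFullScreenQuad();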

Yes, I thought of this, but I’d rather not have to load the full image into a texture just to render it back on top. The iPhone is a bit limited in graphics memory and maximum texture size, and I’m working with a BIG image, larger than the texture size limit, so I’d have to break it into pieces, etc. That’s a lot of mucking around, and performance would be terrible, loading textures and re-rendering the entire image every time.

Surely the destination alpha should work to solve this problem…or not? Can anyone confirm whether dest alpha is supposed to work like I want it to?

If you’re able to use fragment shaders, emulating the old alpha test with the discard directive would be an option. That way, fragments with an alpha value below a certain threshold are discarded and never update the depth buffer.

When drawing the second textured object with the exact same z-coordinates and depth testing enabled, only the portions not occluded by the first object will be drawn.
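
For reference, a minimal fragment shader along those lines (a sketch only; the uniform and varying names and the threshold are placeholders):

    // Emulates the fixed-function alpha test via discard (GLSL ES 1.00).
    static const char *kAlphaTestFragmentShader =
        "precision mediump float;                          \n"
        "uniform sampler2D u_texture;                      \n"
        "uniform float u_threshold;                        \n"
        "varying vec2 v_texCoord;                          \n"
        "void main() {                                     \n"
        "    vec4 color = texture2D(u_texture, v_texCoord);\n"
        "    if (color.a < u_threshold)                    \n"
        "        discard; // depth buffer stays untouched  \n"
        "    gl_FragColor = color;                         \n"
        "}                                                 \n";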

Could you post some screenshots of what you have so far?

Yes, I thought of this too, but an alpha test is all-or-nothing and I need partial values: the original image has partial transparency.

All of my rendering is 2D textures, so there is no depth buffer; the original image I want to render behind just has an 8-bit alpha channel, with some areas partially transparent.

Hmm, I’ve just realised that I could possibly use a 1-bit mask channel after all. I think I can fudge it to remove the need for an alpha gradient. It won’t be quite as neat, but maybe it could work if I process the alpha channel into a mask channel.
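
Something like this, roughly (a sketch, assuming tightly packed RGBA8888 pixel data in memory):

    #include <stddef.h>
    #include <stdint.h>

    // Collapse an 8-bit alpha channel into a hard 0/255 mask, in place.
    static void alpha_to_mask(uint8_t *rgba, size_t pixelCount, uint8_t threshold)
    {
        for (size_t i = 0; i < pixelCount; i++) {
            uint8_t *alpha = &rgba[i * 4 + 3];
            *alpha = (*alpha >= threshold) ? 255 : 0;
        }
    }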

I’ll give this a go. Thanks for the inspiration.

AAAHHAAAA! (Sheepish grin)

DST_ALPHA does work after all. I discovered today that I was rendering twice in my code. The render-behind pass was working perfectly, and then I was rendering over the top of it again. Duh!

Thanks for your help and suggestions.