merging RGB with separate A

Sorry, this must have been asked a million times before, but I just can’t seem to find the right search term on Google.

I have an RGB image and a separate alpha; how do I merge the two into one RGBA texture?

Also, is it possible to load an RGB image with a fixed alpha of my choice, rather than the default of 1 as stated in the man pages?

Thx,

Joe.

This is more of a basic programming problem than an OpenGL one. You give OpenGL your custom image data in glTexImage*. If you need to combine RGB and alpha, surely you can do that yourself.

I forgot to mention: the whole point of this combination is to do it REALLY fast.
Altering the source data to include alpha in software (which is what I think you’re suggesting) would make the whole thing uselessly slow…

What I’m asking is: does the OpenGL API address this by allowing RGB data to be loaded separately from the alpha values?

Thanks for the quick reply

He’s saying that you need to set this up in memory and then call glTexImage2D with the appropriate parameters. If you want an RGBA texture, you’ll need to pass an array that contains 4 bytes per pixel instead of 3.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512,
             0, GL_RGBA, GL_UNSIGNED_BYTE, bmp);
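For reference, the CPU-side interleave is just this (rgb, alpha, w and h are placeholder names here; for your fixed-alpha question you’d write a constant instead of alpha[i]):

/* Sketch: interleave a w*h*3 RGB buffer and a w*h alpha buffer
   into one w*h*4 RGBA buffer, then upload it. */
unsigned char *rgba = malloc(w * h * 4);
for (int i = 0; i < w * h; ++i) {
    rgba[i * 4 + 0] = rgb[i * 3 + 0];   /* R */
    rgba[i * 4 + 1] = rgb[i * 3 + 1];   /* G */
    rgba[i * 4 + 2] = rgb[i * 3 + 2];   /* B */
    rgba[i * 4 + 3] = alpha[i];         /* A -- or a constant for fixed alpha */
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h,
             0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);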

I haven’t tried it, but I think it is possible to load a different alpha (or a single color channel) into an existing texture. You would pass an array with 1 byte per pixel, like this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512,
             0, GL_ALPHA, GL_UNSIGNED_BYTE, bmp);

Yep, I’m aware of that method… however, modifying an array of 256×256×3 bytes (RGB) and merging in a dynamically changing 256×256 alpha map, 10 times per second, on the CPU may be quite slow…?

So what I was wondering is whether there is fixed functionality for this kind of operation in OpenGL, where you can (generally speaking) combine two textures in graphics memory, holding RGB and A separately, into one RGBA texture.

I hope I’m making myself clearer and not more confusing. Basically I’m asking how to implement your solution using OpenGL and graphics memory only.

Ah yes, your second post looks most promising :slight_smile:
I’ll give that a go.

<edit> Unfortunately, a quick look at the man pages says it all:
GL_ALPHA Each element is a single alpha component. The GL converts it to floating point and assembles it into an RGBA element by attaching 0 for red, green, and blue.

So that would erase the previous image… :frowning:

Thanks for the quick reply!

Joe.

Originally posted by mordrax:
Yep, I’m aware of that method… however, modifying an array of 256×256×3 bytes (RGB) and merging in a dynamically changing 256×256 alpha map, 10 times per second, on the CPU may be quite slow…?

Have you tried that? Unless you do something really wrong, the speed at which a decent CPU can interleave the RGB array with the A array to form RGBA will most likely be limited only by how fast the CPU can read the data from memory and write it back.
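(Back-of-envelope: a 256×256 RGBA image is 256 KB, so rebuilding and re-uploading it 10 times per second moves roughly 2.5 MB/s, a tiny fraction of what even a modest CPU and memory bus can sustain.)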


So what I was wondering is whether there is fixed functionality for this kind of operation in OpenGL, where you can (generally speaking) combine two textures in graphics memory, holding RGB and A separately, into one RGBA texture.

If you need the result stored in a single texture, you can use some form of render-to-texture functionality (pbuffers, FBOs, or a simple glCopyTexSubImage2D from the framebuffer) together with multitexturing (or fragment programs) to combine the RGB and A.

If there is no need to have the result stored in a texture and you have sufficiently capable hardware, you can use multitexturing (or fragment programs) to combine the two textures during the rendering in which you would otherwise use the combined texture.
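For example, the fixed-function setup for that could look roughly like this, using the GL 1.3 / ARB_texture_env_combine path (rgbTex and alphaTex are placeholder names, and both units need the same texture coordinates):

/* Unit 0: the RGB texture, passed through unchanged. */
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, rgbTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

/* Unit 1: keep RGB from the previous unit, take alpha from this texture. */
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, alphaTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_TEXTURE);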

Are you wanting a prerendered alpha animation? I think the quickest method would be multitexturing, as Komat suggested. You could make several textures with an internal format of just GL_ALPHA and use them with a GL_RGB texture. I don’t know for sure whether GL_RGB and GL_ALPHA can be combined in a multitexture, but two GL_RGBA textures would work just as well.

If you are wanting to create the alpha channel in real time on the CPU and upload it, then maybe glTexSubImage2D won’t overwrite the RGB channels the way glTexImage2D does, as you found.
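One caveat, though: I believe the GL_ALPHA-to-RGBA expansion quoted from the man pages above applies to glTexSubImage2D as well, so sub-uploading GL_ALPHA data into an RGBA texture would still zero the RGB. The safe variant is to keep the alpha in its own GL_ALPHA texture (combined with the RGB texture via multitexturing) and sub-update only that, for example:

/* Once, at load time: an alpha-only texture. */
glBindTexture(GL_TEXTURE_2D, alphaTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 256, 256,
             0, GL_ALPHA, GL_UNSIGNED_BYTE, NULL);

/* Each update (10+ times per second): replace only the alpha data. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                GL_ALPHA, GL_UNSIGNED_BYTE, newAlpha);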

If there is no need to have the result stored in a texture and you have sufficiently capable hardware, you can use multitexturing (or fragment programs) to combine the two textures during the rendering in which you would otherwise use the combined texture.

Unfortunately, that is what I was trying to avoid. I can load both and then combine them, but that is two texture fetches; given that I have 5 such pairs, that’s 10 fetches.

However, I can optimise this by putting 4 alphas in one RGBA texture to minimise texture fetches.
The alphas are used as a kind of dynamic occlusion map which will change 10+ times per second. That’s why I needed everything to be done in graphics memory: once loaded, it will continually render to texture and repeat.
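For example, if each map is rendered on the card anyway, it could go into a different channel of the same render target via glColorMask, then be copied once into the packed texture (drawOcclusionMap and packedAlphaTex are placeholders for whatever my render pass and texture end up being; each pass is assumed to write its value to all channels):

glColorMask(GL_TRUE,  GL_FALSE, GL_FALSE, GL_FALSE);
drawOcclusionMap(0);   /* lands in red */
glColorMask(GL_FALSE, GL_TRUE,  GL_FALSE, GL_FALSE);
drawOcclusionMap(1);   /* lands in green */
glColorMask(GL_FALSE, GL_FALSE, GL_TRUE,  GL_FALSE);
drawOcclusionMap(2);   /* lands in blue */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
drawOcclusionMap(3);   /* lands in alpha */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* packedAlphaTex must already exist (created with glTexImage2D). */
glBindTexture(GL_TEXTURE_2D, packedAlphaTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256);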

If there were a simple way to combine the “occlusion map” into the actual texture as an alpha, then I could reduce the number of fetches and hopefully make this faster :slight_smile:

Joe.

Originally posted by mordrax:

If there were a simple way to combine the “occlusion map” into the actual texture as an alpha, then I could reduce the number of fetches and hopefully make this faster :slight_smile:

If you already have render-to-texture implemented in your engine, you can create the combined texture by using a multitexturing setup that takes RGB from one texture and alpha from the other.
If the occlusion map is also generated on the card by render-to-texture, then you could try to incorporate that texture combination directly into the occlusion-map render.
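Putting that together, the combine pass might look something like this sketch: draw a screen-sized quad with the two-unit combine setup shown earlier, then capture the result (drawFullscreenQuad and combinedTex are placeholders, and the 512×512 size is just an example):

/* Render the combined image: unit 0 supplies RGB, unit 1 supplies A. */
drawFullscreenQuad();   /* must issue texcoords for both texture units */

/* Capture it into the single RGBA texture used for normal rendering.
   combinedTex must already exist (created with glTexImage2D). */
glBindTexture(GL_TEXTURE_2D, combinedTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);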