glAlphaFunc

I’m trying to use glAlphaFunc to render a textured quad with some parts fully visible and others fully invisible.

The texture is an RGBA texture, 8 bits per channel. The problem is that the entire quad is visible. If I use glColor to specify (1.0, 1.0, 1.0, 0.5), and glAlphaFunc is called with GL_GREATER and 0.5, then nothing is drawn.

I could be specifying the alpha values in the texture wrongly: should the texels I want visible be 0 and the ones I want invisible 255? That is what I have at the moment, though of course using GL_LESS instead of GL_GREATER should offset any problem this would cause.

I’m new to glAlphaFunc, so I could have left out something really basic.

Any help is appreciated.

Zadkiel

First, when you specify GL_GREATER with a reference value of 0.5, all fragments with alpha greater than 0.5 will pass. If you set alpha to 0.5, no fragment will have alpha greater than 0.5 (they are equal to it instead), and therefore they are all rejected.

Second, an alpha of 0 is totally transparent and an alpha of 255 is totally opaque. This is how people usually interpret the alpha value, but it’s up to you to use it another way.

As a general way to make partly transparent textures, you can set the alpha values to 0 (transparent) or 255 (opaque) only, and use glAlphaFunc(GL_GREATER, 0.5f) to reject the transparent fragments. You shouldn’t use glColor in that case; it can screw up the alpha test if you don’t really know what’s going on.
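To illustrate, here is a minimal sketch of that setup in fixed-function OpenGL (the quad coordinates and texture handle are placeholders, not anything from the original post):

#include <GL/gl.h>

/* Draw a quad whose texels are either fully opaque (alpha 255)
 * or fully transparent (alpha 0); the alpha test discards the
 * transparent fragments instead of blending them. */
void draw_masked_quad(GLuint texture)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);

    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);     /* keep fragments with alpha > 0.5 */

    /* Leave the vertex alpha at 1.0; a glColor alpha of 0.5 would be
     * multiplied into the texture alpha (with the default GL_MODULATE
     * mode) and drag every fragment down to the reference or below. */
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    glDisable(GL_ALPHA_TEST);
}

This also explains the original symptom: with GL_MODULATE texturing, a glColor alpha of 0.5 caps every fragment’s alpha at 0.5, so GL_GREATER with reference 0.5 rejects everything.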

Something for the driver guys: when you have a texture (sprite) like that, where the background is transparent (alpha 0) and the rest is opaque (alpha 1), what is the quickest alpha test?
I assume GL_NOTEQUAL with 0, or does it not make any difference?

On CPUs, every comparison is equally fast (a CMP instruction actually performs all possible comparisons at once and sets flags for later use), so I guess it’s the same for graphics chips.

Just in case anyone’s wondering, my problem was in the creation of the texture: while I had set it to load GL_RGBA data, I had left it creating a GL_RGB texture.

Stupid mistake.
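For anyone who hits the same thing, the difference is just the internalFormat argument of glTexImage2D (a sketch; width, height and pixels are assumed to come from your own image loader):

/* Wrong: a GL_RGB internal format silently drops the alpha channel,
 * so every texel ends up with alpha = 1.0 and the whole quad passes
 * a GL_GREATER / 0.5 alpha test. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Right: keep GL_RGBA as the internal format so the per-texel alpha
 * survives the upload and the alpha test has something to reject. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);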
And just a comment on the comparison: GL_NOTEQUAL with 0 would actually be the fastest way, as there would be no need to involve the FPU, but the drivers would have to be very specifically optimised for it, and they most likely aren’t.


@Zadkiel:
FPU? The alpha test is done by the graphics card, and I’m absolutely sure it’s a one-cycle operation in the pixel pipeline (just like the z compare). There’s no way to do it faster.

But even for a CPU/FPU, your assumption is wrong. This “optimization” would require the driver to check whether the reference value is zero, i.e. yet another comparison. ;)
AFAIK, on modern CPUs/FPUs a comparison always takes one clock cycle (plus register loading), so there’s no way, and no need, to optimize it.

“on modern CPUs/FPUs a comparison always takes one clock cycle”
Actually, floating point comparisons take place in the FPU, and there are a lot of clock cycles involved.

However, a check to see whether a value is zero only requires a single AND with 0FFFFFFFFh, after which the CPU flags are set; there’s no need even for register loading.

This works because, whether the value is floating point or integer, the binary representation of (positive) zero is all zero bits.
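In C, that idea looks something like this (a sketch; note that IEEE 754 negative zero has its sign bit set, so a pure bit test treats -0.0f as non-zero):

#include <string.h>

/* Test a float for zero using integer operations only: +0.0f has an
 * all-zero bit pattern, so no FPU instruction is needed. */
int is_positive_zero(float f)
{
    unsigned int bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the float's bits */
    return bits == 0u;
}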

…but, as you said, these things take place on the 3D card, and I have no idea how those CPUs (or is it GPUs now?) actually work; maybe they’re all floating point centered.