I’m interested in what happens when the alpha test reference value and an alpha value from a texture have to be compared. For example, if the texture format has an 8-bit (integer) alpha channel and I provide some alpha function and reference value (as a float), then somewhere there must be either a float->integer conversion or vice versa to actually compare them. The glAlphaFunc documentation unfortunately doesn’t cover this in detail.
If I want to test for, e.g., equality to a certain integer alpha value, is there a general and reliable way to do so? I mean something like glAlphaFunc(GL_EQUAL, intval / 255.0f). That sometimes suffers from what I think are float<->int conversion precision issues in the compiler or driver (it doesn’t always work as expected). In the particular case where this failed, which involved GL_GEQUAL, adding a very small value (like 0.000001) after dividing by 255 did help. But I’m looking for a general and portable way of solving the problem, if one exists.
Comparing floating point values with “==” is bad on any processor!
You could do it inside a shader and use an interval test, with a window around your value that is small enough to reliably isolate intval / 255.0f and big enough not to miss it.
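The comparison such a shader would perform can be sketched in plain C like this (function name and epsilon choice are illustrative; in practice the test would live in the fragment shader, discarding fragments that fail):

```c
#include <math.h>

/* Interval test sketch: pass when alpha falls within half of one 8-bit
 * step of the target level, so exactly one quantized alpha value
 * (intval / 255) can land inside the window. */
static int alpha_matches(float alpha, int intval)
{
    const float eps = 0.5f / 255.0f;
    return fabsf(alpha - (float)intval / 255.0f) < eps;
}
```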
If things like GL_EQUAL are there at all, there has to be some way to use them reliably? Or are they mostly useless (at least in apps that must be portable)? I wonder whether this kind of conversion inside OpenGL has to conform to something. If there is a standard, then I can try to tweak compiler options or even write a custom conversion function to get exactly the value needed. Unfortunately, so far I have a bad feeling that this isn’t defined, as it works on some implementations (software Mesa) but not on others.
I realize it may not be the best approach from an OpenGL viewpoint. Unfortunately, that’s what I need to do if possible. It’s for fixing a problem in Wine (winehq.org), which does D3D->OpenGL translation for games, so I can’t change the algorithm (it’s decided by the game).
The specification does define this: sections 4.1.4 and 2.14.9 of the OpenGL 2.1 specification describe how the float value passed to AlphaFunc should be converted to a fixed-point value and how the fragment’s alpha value should be rounded for comparison purposes. Of course, it is always possible that some hardware or driver does not implement some detail in the way described by the specification.
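The conversion rule described there can be sketched like this (a reading of the spec’s round-to-nearest fixed-point mapping, not actual driver code; the function name is mine):

```c
/* Sketch of the spec's conversion: clamp a floating-point value to
 * [0, 1] and map it to the nearest m-bit fixed-point value, i.e.
 * round(f * (2^m - 1)). With m = 8 this is the 0..255 alpha range. */
static unsigned quantize_alpha(float f, unsigned m)
{
    float max = (float)((1u << m) - 1u);
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return (unsigned)(f * max + 0.5f);
}
```

Under this rule quantize_alpha(intval / 255.0f, 8) recovers intval exactly, which is why GL_EQUAL works on implementations that round as specified; an implementation that truncates instead is the kind that would need the epsilon workaround.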