I render a quad with per-sample shading (16x MSAA).
The quad has a texture (16x MSAA).
The pixel and texel alignment is off by:
I would like to copy the correct sample from the texture (the sample is accessed via nearest-neighbour search through the sample positions, ignoring the nonexistent 1:1 mapping).
However, it appears to me that the sample positions have no actual meaning for the alpha values.
An implementation confirms this problem.
Is there some information explaining rendering with alpha-to-coverage (a2c) and the order of samples?
I am glad to elaborate further once I have a better understanding.
Alpha-to-coverage is described in §17.3.3 of the OpenGL 4.5 specification:
If SAMPLE_ALPHA_TO_COVERAGE is enabled, a temporary coverage value is generated where each bit is determined by the alpha value at the corresponding sample location. The temporary coverage value is then ANDed with the fragment coverage value to generate a new fragment coverage value. If the fragment shader outputs an integer to color number zero, index zero when not rendering to an integer format, the coverage value is undefined.
No specific algorithm is required for converting the sample alpha values to a temporary coverage value. It is intended that the number of 1’s in the temporary coverage be proportional to the set of alpha values for the fragment, with all 1’s corresponding to the maximum of all alpha values, and all 0’s corresponding to all alpha values being 0. The alpha values used to generate a coverage value are clamped to the range [0, 1]. It is also intended that the algorithm be pseudo-random in nature, to avoid image artifacts due to regular coverage sample locations. The algorithm can and probably should be different at different pixel locations. If it does differ, it should be defined relative to window, not screen, coordinates, so that rendering results are invariant with respect to window position.
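To make the spec language concrete, here is a minimal sketch of *one possible* mask-generation scheme that satisfies the stated intent (popcount proportional to alpha, pseudo-random variation keyed on window-relative pixel position). The function name and the rotation hash are hypothetical; the spec deliberately mandates no particular algorithm, and real drivers differ.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: maps alpha in [0, 1] to a 16-bit coverage mask whose
   number of set bits is proportional to alpha.  A per-pixel rotation of
   the bit pattern stands in for the "pseudo-random" variation the spec
   suggests; the hash (px * 7 + py * 13) is an arbitrary placeholder. */
static uint16_t a2c_mask(float alpha, int px, int py)
{
    if (alpha <= 0.0f) return 0x0000;
    if (alpha >= 1.0f) return 0xFFFF;

    int ones = (int)(alpha * 16.0f + 0.5f);      /* proportional bit count */
    unsigned mask = (1u << ones) - 1u;           /* low 'ones' bits set     */

    /* rotate within 16 bits, keyed on window-relative pixel position */
    unsigned rot  = ((unsigned)(px * 7 + py * 13)) & 15u;
    unsigned wide = (mask << rot) | (mask >> (16u - rot));
    return (uint16_t)(wide & 0xFFFFu);
}
```

Note the last property demonstrated below: with a scheme like this, the mask depends only on alpha and pixel position, so two primitives with the same alpha at the same pixel get the *identical* mask, which matters for the layering question later in this thread.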
Thank you for answering.
I want to pose a related question.
I have several transparent layers, windows behind windows.
And I intend to render them using a fixed alpha value (0.3).
And I begin to appreciate the documentation: if the alpha value is ANDed, then several layers of the same value will remain at a constant 0.3.
What I would like is ADD.
Is something like that possible, or does that contradict the use of a2c?
Alpha values aren’t ANDed; coverage masks are ANDed. But multiple layers with the same value may be problematic if the mask generation is deterministic. The specification suggests (but doesn’t require) that the mapping between alpha and mask changes per-pixel, but doesn’t mention anything about changing for the same pixel in different primitives.
What I would like is ADD.
Is something like that possible or does that contradict the use of a2c
It sounds like you’d be better off just using blending. Alpha-to-coverage inherently implements overlay blending, i.e. the equivalent of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
The main advantage of alpha-to-coverage is that you don’t have to render the polygons in depth order; you can rely upon depth buffering. If you don’t need that, there isn’t much reason to use it.
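To illustrate why blending gives the "ADD" behaviour asked about above: stacking n layers of constant alpha a with the standard over-blend accumulates opacity as 1 - (1 - a)^n, rather than staying at a constant a. A minimal sketch of that arithmetic (function name is hypothetical, not a GL call):

```c
#include <assert.h>
#include <math.h>

/* Opacity after stacking n layers of constant alpha 'a' with
   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)-style blending.
   Each layer lets (1 - a) of the background light through, so the
   accumulated opacity is 1 - (1 - a)^n. */
static float stacked_opacity(float a, int n)
{
    float transmitted = 1.0f;            /* light reaching the viewer */
    for (int i = 0; i < n; ++i)
        transmitted *= (1.0f - a);       /* each layer attenuates it  */
    return 1.0f - transmitted;
}
```

With a = 0.3, two layers give 0.51 and three give 0.657, i.e. the layers visibly add up, which a deterministic a2c mask at constant alpha cannot reproduce.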
Alpha-to-coverage inherently implements overlay blending
That was my initial motivation, but I observe differently.
It appears to me that it doesn't implement over blending.
The values are constant. There is no visible overlap; occluded objects are just not visible.
That is part of my confusion.
Have you tried using different alpha values?
If the generation of coverage masks is deterministic, then overlaying multiple primitives with the same constant alpha will use the same mask for each primitive, meaning that you’ll get the closest primitive overlay-blended onto the opaque background; the intermediate primitives will be completely occluded.
It isn’t clear exactly what you’re trying to achieve. If you don’t actually need alpha-to-coverage, you may be better off either using blending, or using a fixed mask with glEnable(GL_SAMPLE_MASK) and glSampleMaski().
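A state-setup sketch of the fixed-mask alternative mentioned above (not runnable on its own; it assumes an active GL 3.2+ context and a 16-sample framebuffer, and the specific bit patterns are arbitrary examples):

```c
/* Fixed-mask sketch: instead of alpha-to-coverage, choose the covered
   samples explicitly.  With 16x MSAA, each set bit enables one sample. */
glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);
glEnable(GL_SAMPLE_MASK);

/* Cover 5 of 16 samples, roughly alpha = 0.3.  By picking disjoint bits
   per layer, successive layers add coverage instead of repeating the
   same samples. */
glSampleMaski(0, 0x001F);   /* first layer:  samples 0..4  */
/* ... draw first layer ... */
glSampleMaski(0, 0x03E0);   /* second layer: samples 5..9  */
/* ... draw second layer ... */
```

This gives explicit control over which samples each layer occupies, at the cost of managing the masks yourself.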