Alpha to coverage: how does it actually work?

So, trying to figure out how to render a model’s hair, I obviously hit the wall that transparency is. Eventually I implemented sorting on the polygons related to the hair and used pre-multiplied alpha blending, with quite decent results.

However, we are talking about hair strips that add up to over 7000 triangles, which happen to be severely intertwined. I was considering alternative approaches (like splitting the mesh into hair-strip sections to accelerate the sorting, etc.) when I saw the term “alpha to coverage” mentioned, with the magic words “no z sorting” close to it. So I went out to hunt this holy grail.

Oddly enough, there are few mentions of how to actually set it up in GL… and it couldn’t be any easier, really. So I did, and indeed the effect it has on the hair is basically perfect. Of course, hair is apparently an ideal case for alpha to coverage, since it doesn’t really need blending, just alpha testing, and the added antialiasing simply makes it look great.
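For anyone else hunting for the setup: this is roughly all it takes. A minimal sketch, assuming the context or bound FBO already has multisample buffers (the helper names and the GLEW include are just my own choices):

```cpp
// Minimal alpha-to-coverage setup sketch. Without an MSAA framebuffer,
// GL_SAMPLE_ALPHA_TO_COVERAGE has no samples to spread the alpha over.
#include <GL/glew.h>   // any GL loader works; GLEW used here as an example

void enableAlphaToCoverage()
{
    glEnable(GL_MULTISAMPLE);               // usually already on for MSAA contexts
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);  // fragment alpha now drives the coverage mask
    glDisable(GL_BLEND);                    // no sorting or blending needed for the hair pass
}

void disableAlphaToCoverage()
{
    glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);
}
```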

Alas, I cannot just be happy with seeing it work. I need to understand why it works.

So, from what I have gathered (and it’s been a mess to actually find good information on this), ATC works by using the alpha value written by the shader output to determine which samples within the multisample pixel the output actually covers, hence the name. So far so good: the lower the alpha value, the fewer samples of the MS pixel get marked as covered.

But what happens from here? How does this actually achieve a correct result without needing to Z-sort all these 7000 triangles?

I believe that perhaps what I’m actually asking is how MSAA works, as I only have a superficial understanding of that too.

If anyone can spare some time to enlighten me, I’ll be quite grateful.

Thank you in advance.

But what happens from here? How does this actually achieve a correct result without needing to Z-sort all these 7000 triangles?

Because it’s effectively alpha-testing.

Supersampling is a technique where you essentially render at a higher resolution, then scale the result down. Multisampling is an optimization of that technique. The issue with supersampling is that most of the extra work is wasted: the values computed in the interior of polygons come out (for the most part) the same as in the non-antialiased version, because texture filtering is already far more effective at antialiasing texture accesses. The real benefit is at polygon edges.

So instead of rendering at a higher resolution, multisampling renders at the normal resolution. However, when it goes to write the actual value, it writes multiple values, based on the coverage of that particular area of the pixel. If the fragment covers 3 of the 8 samples, then it writes the fragment’s value to 3 of the sample values.

After all rendering is done, the multisample buffers are then compacted down into single-sample for display. The values are averaged together.

So any one pixel in the final image is a composite from multiple samples. These samples can be pulled from different fragments that contributed to the coverage of that particular pixel area.
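In GL terms, that compaction step is a multisample resolve, and a common way to trigger it is a blit from a multisampled FBO to a single-sampled one. A rough sketch, where the FBO handles are assumed to have been created elsewhere (the multisampled one with GL_TEXTURE_2D_MULTISAMPLE or multisampled renderbuffer attachments):

```cpp
#include <GL/glew.h>

// Resolve a multisampled FBO into a single-sampled one via a blit.
// The blit averages the samples of each pixel down to a single value.
void resolveMultisample(GLuint msaaFbo, GLuint resolveFbo, int width, int height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```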

The depth buffer is also multisampled, and the coverage is modified by the depth test at each of the sample points within a pixel.

Normally, the coverage comes from the shape of a triangle within the pixel area. The samples that fall within that pixel area are the samples that get written to.

Alpha-to-coverage is a way to modify the coverage for any fragments. It sets the coverage based on the alpha value of the fragment. Therefore, if the alpha is large, most of the samples are covered. If the alpha is zero, none of the samples are covered, and the fragment is effectively discarded.

That last part is why it’s order-independent. Just like the alpha test.
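To make the “alpha becomes coverage” idea concrete, here is a small conceptual sketch of the mapping. The exact rounding (and any dithering, and which bits get set) is implementation-defined, so this is not any particular GPU’s behaviour:

```cpp
#include <cmath>
#include <cstdint>

// Conceptual sketch only: turn a fragment's alpha into a coverage mask for an
// 8x multisample pixel. Real hardware decides which bits to set; here we just
// fill the low ones.
std::uint32_t alphaToCoverageMask(float alpha, int numSamples = 8)
{
    int covered = static_cast<int>(std::round(alpha * numSamples));
    if (covered <= 0)          return 0u;                        // alpha ~0: fragment effectively discarded
    if (covered >= numSamples) return (1u << numSamples) - 1u;   // fully covered
    return (1u << covered) - 1u;                                 // e.g. alpha 0.375 at 8x -> 3 samples
}
```

In the real pipeline this mask is then combined (ANDed) with the triangle’s geometric coverage, and each surviving sample is depth-tested individually, as described above.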

The part that would concern me is the fact that, while alpha-to-coverage does cause the coverage to be determined by the alpha, coverage isn’t a single number. The coverage mask represents the samples that are covered by a fragment. It is a mask, a bitfield. Coverage doesn’t just say that 3 of 8 samples are covered; it says which 3 of the 8 samples are covered.

Alpha-to-coverage doesn’t give you the power to specify that. To do that, you need the ability to write the coverage mask from the shader. And that is (I think) a 4.x hardware feature.

Alpha-to-coverage therefore allows implementations to decide which particular samples are covered.
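For completeness, the shader-side route mentioned above (writing the coverage mask yourself) looks roughly like this in GLSL 4.00. It is only a sketch, stored here as a C++ string the way it would be fed to glShaderSource; the texture and varying names, and the 8x sample count, are made up:

```cpp
// Sketch of a GLSL 4.00 fragment shader that writes the coverage mask itself
// via gl_SampleMask instead of relying on alpha-to-coverage.
static const char* kCoverageMaskFragSrc = R"(
#version 400 core
in vec2 uv;
uniform sampler2D hairTex;
out vec4 fragColor;

void main()
{
    vec4 texel = texture(hairTex, uv);
    // Decide not only how many samples are covered, but which ones.
    int covered = int(round(texel.a * 8.0));   // assuming 8x MSAA
    gl_SampleMask[0] = (1 << covered) - 1;     // pick the low 'covered' bits
    fragColor = texel;
}
)";
```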

Thank you very much for your explanation.

Let me ask something more to see if I understood this correctly:

Ignoring, for now, your last comment about the hardware choosing which samples are covered, and assuming I had control and could choose at will which samples my fragments write to depending on their alpha: does this mean that alpha to coverage would technically give me the ability to do up to 8 layers of order-independent alpha blending (or whatever my MS sample count is)?

Now, I understand alpha to coverage is not alpha blending, of course; it’s just alpha testing, like you said. But it could be used on a final pass to read specific samples and then “manually” blend them in a shader, yes?
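Just to be clear about what I mean by reading specific samples in a final pass: something along these lines, a full-screen shader that fetches each sample of a multisample texture and combines them itself (again only a sketch as a C++ string, with made-up names and an assumed 8x sample count):

```cpp
// Sketch of a manual resolve: fetch individual samples of a multisample
// colour buffer and combine them in the shader.
static const char* kManualResolveFragSrc = R"(
#version 150 core
uniform sampler2DMS sceneMS;   // the multisampled colour buffer
out vec4 fragColor;

void main()
{
    ivec2 pixel = ivec2(gl_FragCoord.xy);
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 8; ++i)                 // 8x MSAA assumed
        sum += texelFetch(sceneMS, pixel, i);   // read one specific sample
    fragColor = sum / 8.0;                      // plain average; custom blending would go here
}
)";
```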

Oh, by the way, the Google search “how does alpha to coverage work?” now has this thread as the second hit. Darn, Google is fast.

Thanks again!

Ignoring, for now, your last comment about the hardware choosing which samples are covered, and assuming I had control and could choose at will which samples my fragments write to depending on their alpha: does this mean that alpha to coverage would technically give me the ability to do up to 8 layers of order-independent alpha blending (or whatever my MS sample count is)?

No, because alpha-to-coverage takes the alpha value and converts it to coverage. So even if you could pick which sample(s) it used, your fragment’s alpha value would still be wrong. The alpha value you choose is a coverage value, not really the transparency of the pixel.

In order to use the coverage mask for OIT, each fragment would need to write to a distinct sample in the multisample buffers, so its coverage mask would only have one bit set. But you would also need the alpha value to be the actual transparency of the fragment, so you wouldn’t be able to use alpha-to-coverage.

Also, your coverage mask will be modified by the area covered by the triangle. So if that sample happens to not be within your triangle’s area, it doesn’t get written.

Thank you. After reading your explanation I was able to better understand other sources I had found before. Clearly my problem was an incomplete understanding of multisampling (and even of simple alpha testing!).

I suppose I’ll add multisampling experiments to the menu, as its uses clearly go beyond just antialiasing.