As everyone knows, transparent objects need to be drawn last, sorted for depth, front to back. My problem is that when emitting a transparent batch, I don't know whether more batches might still be emitted, so I can't sort while batches are being emitted. The sort has to happen at the same point where all the other (non-transparent) batches are sorted. I have two solutions to this problem:
a) a separate list of transparent batches, with a separate sort made on it
b) pairs of (link to transparent batch in the batch queue, depth) are kept as the transparent batches are emitted; then, before doing the main sort, these pairs are sorted by depth and each batch's key in the batch queue is updated based on the result of that sort.
I don’t really know whether these are the only two options, nor which one is better. Maybe you can give me some advice before I change half of my project?
I personally think option a) is best: the sort on the non-transparent queue is then faster, since it contains fewer objects, and two sorts have to be made no matter which option is chosen.
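To make option a) concrete, here is a minimal sketch of a render queue with a separate transparent list, assuming hypothetical `Batch` objects that carry a camera-space depth (the names are made up for illustration, not from any real engine):

```python
from dataclasses import dataclass, field

@dataclass
class Batch:
    name: str
    depth: float          # distance from camera
    transparent: bool = False

@dataclass
class RenderQueue:
    opaque: list = field(default_factory=list)
    transparent: list = field(default_factory=list)

    def emit(self, batch):
        # Route each batch at emission time; no sorting happens yet,
        # so it doesn't matter that more batches may still arrive.
        (self.transparent if batch.transparent else self.opaque).append(batch)

    def finalize(self):
        # Once emission is finished: opaque front-to-back (early-z friendly),
        # transparent back-to-front (for classic alpha blending).
        self.opaque.sort(key=lambda b: b.depth)
        self.transparent.sort(key=lambda b: b.depth, reverse=True)
        return self.opaque + self.transparent
```

The point of the split is exactly what's argued above: each sort runs over a smaller list, and the transparent sort is simply deferred until `finalize()`.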
I would go for option a. You don’t really want to handle transparent objects in the same way as opaque objects if you plan to sort them per-object based on distance from camera.
If you can give us some information about the target hardware generation, maybe I can give you better options.
The necessity of depth sorting also depends on which blending modes you plan to use. Of course, if you want full flexibility and allow any blending mode, then you most probably have to do some sorting anyway.
He’s talking about the blending functions you use.
Let’s say you have the objects A, B, and C. And they are to be blended in that order. So in pseudocode, it is Blend(Blend(A, B), C).
If the “Blend” operation is linear interpolation (alpha blending), then Blend(Blend(A, B), C) != Blend(A, Blend(B, C)). The order is important because linear interpolation is not mathematically associative.
However, if the “Blend” operation is addition, then basic math tells us that (A + B) + C = A + (B + C). The order is not important.
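A quick numeric check of both claims, treating each "object" as a single grayscale value and using one shared alpha for the lerp blend (values chosen as exact binary fractions so the comparison is safe):

```python
def lerp_blend(dst, src, alpha):
    # Classic interpolative alpha blending: src*alpha + dst*(1-alpha)
    return src * alpha + dst * (1.0 - alpha)

def add_blend(dst, src):
    # Additive blending
    return dst + src

A, B, C, a = 0.5, 0.25, 0.125, 0.5

# Interpolative blending: grouping changes the result.
left  = lerp_blend(lerp_blend(A, B, a), C, a)   # Blend(Blend(A, B), C) -> 0.25
right = lerp_blend(A, lerp_blend(B, C, a), a)   # Blend(A, Blend(B, C)) -> 0.34375

# Additive blending: grouping is irrelevant, (A + B) + C == A + (B + C).
```

Running this, `left` and `right` really do differ, while the additive groupings are identical.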
Also, I am very interested in what role destination alpha plays in this game.
There are various translucency techniques which do the sort differently or avoid the sort altogether, with varying impact on performance and quality. Depth peeling, ALPHA_TO_COVERAGE (MSAA), deferred rendering transparency, alpha blending as a post-process, BSPs, etc. are some of these.
Why use destination alpha for transparency?
If you blend translucent objects front-to-back, you’d use destination alpha. For example, see this. For normal interpolative alpha (ALPHA, 1-ALPHA, rendered back-to-front) you wouldn’t use destination alpha, just source alpha.
As everyone knows, transparent objects need to be drawn last, sorted for depth, front to back
Not necessarily. See previous two points. Traditional alpha blending sorts and renders back-to-front. But you can do front-to-back (or other orders) with an associative blend function.
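As a small check of that point: assuming premultiplied colors and a destination-alpha accumulator (a single grayscale channel here, purely for illustration), front-to-back "under" compositing gives the same result as classic back-to-front "over" compositing:

```python
def over_back_to_front(layers):
    # layers: (premultiplied_color, alpha) pairs, nearest first
    color = 0.0
    for c, a in reversed(layers):          # iterate farthest first
        color = c + (1.0 - a) * color      # "over" operator
    return color

def under_front_to_back(layers):
    color, dst_alpha = 0.0, 0.0            # dst_alpha = accumulated coverage
    for c, a in layers:                    # iterate nearest first
        color += (1.0 - dst_alpha) * c     # "under": destination alpha gates src
        dst_alpha += (1.0 - dst_alpha) * a
    return color

layers = [(0.25, 0.5), (0.125, 0.25), (0.5, 0.5)]  # nearest -> farthest
```

The front-to-back variant is exactly where destination alpha earns its keep: it stores how much of the pixel is already covered by nearer surfaces.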
My problem is that when emitting a transparent batch, I don’t know whether more batches might still be emitted, so I can’t sort while batches are being emitted.
Right. If you’re going to use a translucency algorithm that requires sorting pre-GPU-submission, then yes. You should bin them while culling them from your world representation (e.g. a scene graph), sort them, and then send them down the pipe. Not all techniques require this, though.
Further, while standard linear-interpolation blending (ALPHA, 1-ALPHA) is not associative, pre-multiplied alpha blending (1, 1-ALPHA) “is” associative. And it is exactly this that allows you to do front-to-back blending if you want. Premultiplied alpha is pretty much necessary (or at least more desirable) in many circumstances, including when you have many layers of transparency, are mipmapping alpha textures (or even using LINEAR interpolation on alpha textures), using compressed textures with alpha, etc.
Further, while standard linear-interpolation blending (ALPHA, 1-ALPHA) is not associative, pre-multiplied alpha blending (1, 1-ALPHA) “is” associative.
Yes, pre-multiplied alpha allows you to reverse the sort order, but it does not allow you to avoid sorting: to be able to render translucent surfaces in random order, the blending operation must also be commutative. Unfortunately, AFAIK, this is only true for additive and multiplicative blending.
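Both halves of that statement can be checked numerically. A sketch with single-channel premultiplied (color, alpha) pairs (exact binary fractions, so the equality tests are safe): the premultiplied "over" operator is associative, so grouping doesn't matter, but it is not commutative, so which surface is in front still matters.

```python
def over(src, dst):
    # Premultiplied "over" (ONE, ONE_MINUS_SRC_ALPHA); src is nearer.
    c_s, a_s = src
    c_d, a_d = dst
    return (c_s + (1.0 - a_s) * c_d, a_s + (1.0 - a_s) * a_d)

A, B, C = (0.5, 0.5), (0.125, 0.25), (0.25, 0.5)  # nearest -> farthest

# Associative: grouping is irrelevant.
assoc_left  = over(A, over(B, C))
assoc_right = over(over(A, B), C)

# Not commutative: swapping two layers changes the picture.
swapped = over(B, over(A, C))
```

So premultiplied alpha frees you from caring *where* in the chain a blend happens (useful for front-to-back or partial compositing), but not from sorting the chain itself.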
For some order independent transparency methods available on GL2.1 hardware, check out these slides:
It presents two methods: weighted average blending and dual depth peeling. The latter is rather expensive, so I suggest you go with the former if it fits your needs. It is not the best approach, as it diminishes the perception of depth, but it is pretty fast and lets you avoid per-object sorting.
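To show why weighted average blending is order independent, here is a CPU sketch of one common formulation of the idea (single grayscale channel; this is an illustration of the accumulation math, not the actual shader code from those slides): only commutative sums are accumulated, so submission order cannot affect the result.

```python
def weighted_average(layers, background):
    # layers: (color, alpha) pairs in ANY order; background: opaque color
    n = len(layers)
    if n == 0:
        return background
    sum_ca = sum(c * a for c, a in layers)   # accumulated a_i * c_i
    sum_a  = sum(a for _, a in layers)       # accumulated a_i
    avg_color = sum_ca / sum_a               # alpha-weighted average color
    avg_alpha = sum_a / n
    # Approximate total opacity of n layers of average alpha:
    coverage = 1.0 - (1.0 - avg_alpha) ** n
    return avg_color * coverage + background * (1.0 - coverage)

layers = [(0.5, 0.5), (0.25, 0.25), (0.75, 0.5)]
front = weighted_average(layers, 0.0)
back  = weighted_average(list(reversed(layers)), 0.0)
```

Since everything is built from sums, `front` and `back` come out identical — that's the whole trick, and also why it flattens depth perception: the true front-to-back ordering never enters the formula.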
Thanks a lot to all, a great Christmas present for me. Yeah, and merry Christmas and a happy new year to you all, of course.
If I had managed to stumble on the articles you reference on the web, I would not have asked anything. If you were so kind, maybe you could publish your bookmarks somewhere for us noobs to look at. I think it would improve the quality of posts in the forums. An even better thing would be to collect everyone’s bookmarks in one place.
About destination alpha: what I wanted to know was whether there is some interesting algorithm that uses it, and whether there is some benefit to using destination alpha for transparency. Apparently there is.
Interesting note about BSPs! I had almost forgotten you can sort with them. No distance key is needed for sorting in that case. Though you’d probably still need one for batches that move (i.e. are not static geometry).
I have to admit, I haven’t read the whole thread. Anyway…
I use “transparency groups”, as AFAIK the Doom 3 engine also does. That means I assign each transparent surface a “layer value” (1, 2, 3, 4, …) in the editor.
For example, all small objects and stuff that is close to a wall (like glass in a cupboard door) get a low value (i.e. “1”). All windows get a higher value. Basically, the more likely it is that one can see another transparent surface through some polygon, the higher its value gets.
When rendering, I first render all opaque objects. Then I make a copy of the screen and render all transparent polygons with value 1, using the copy to do the blending manually in a shader. Copy again, layer 2. Copy again, layer 3. Etc. etc.
Of course you should try to use as few layers as possible.
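The layered scheme above can be sketched on the CPU for a single grayscale pixel (a hypothetical reduction, purely to show the control flow; the per-layer copy stands in for the screen copy the shader would sample, and plain alpha blending stands in for whatever fancier function the shader actually uses):

```python
def render_pixel(opaque, layers):
    # opaque: color left by the opaque pass
    # layers: {layer_value: (color, alpha)} for the surfaces covering
    #         this pixel, keyed by their transparency-group value
    fb = opaque
    for value in sorted(layers):       # layer 1, then 2, then 3, ...
        copy = fb                      # "copy of the screen" for this layer
        c, a = layers[value]
        # Manual blend in the "shader", reading the copy; a refraction
        # shader would instead sample `copy` at an offset coordinate.
        fb = c * a + copy * (1.0 - a)
    return fb
```

Within one layer every surface reads the same pre-layer copy, which is exactly why surfaces in the same group must not be visible through each other — and why you want as few layers (copies) as possible.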
The reason I’m using this technique is that transparency alone looks boring. You’ll usually want it to bend the light (i.e. refraction). But to do that, you need a copy of the framebuffer! It also allows you to implement more elaborate blending functions.
So either you have pure transparency that is sorted and works well in all scenarios, or you have only limited correctness, but can use refraction (and hell, it looks nice).