Our fake transparency provides acceptable results with a small number of objects,
but with thousands of objects it suffers from this problem:
Is there any BlendFunc that helps to mitigate this problem? We currently use:
If not, what other techniques can we try?
Weird. Overall, how do you manage your fake transparency? I believe you are exceeding some limit (i.e. a counter), but with so little information I doubt that people can give you pertinent hints.
It’s just standard overlay blending with a solid colour. The amount of background which survives scales as (1-α)^n. If n is large enough, the background gets lost in the rounding and increasing n won’t change anything.
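A quick way to see this decay is to evaluate the arithmetic directly (an illustrative Python sketch, not the rendering code itself):

```python
# Fraction of the original background surviving n "over" blends,
# each with source alpha a, is (1 - a)**n.
def background_survival(alpha: float, n: int) -> float:
    return (1.0 - alpha) ** n

# With alpha = 0.01 (the 1% used in this thread) and thousands of
# layers, the background drops below one 8-bit step (1/255) and
# rounds away entirely.
step = 1.0 / 255.0
print(background_survival(0.01, 100))           # ~0.366
print(background_survival(0.01, 1000) < step)   # True: background is gone
```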
The only things I can think of are to add wireframe (in a different colour from the fill), or to change the hue with depth. Either way, the stuff at the back will get lost, but you should still get some foreground detail. Essentially, you have to make each polygon somehow different from the background it’s being blended with. Actual wireframe (with lines) is problematic because the lines need to be sorted with the polygons. You can implement a wireframe effect using shaders to highlight the pixels at the edges of polygons.
Edit: In any case, you’ll have to depth-sort the polygons, which presumably you aren’t doing at the moment. (α,1-α) blending favours the polygons which are drawn last. If all polygons have the same solid colour, the order doesn’t matter. But as soon as you vary the colour, that no longer applies. (α,1) blending is order-independent.
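The order dependence can be demonstrated with a small simulation (illustrative Python, one colour channel). Note that plain (α,1) additive blending sums without bound, so in practice it needs clamping or very low alphas:

```python
# Standard "over" blending: dst = src*a + dst*(1 - a). Order-dependent.
def blend_over(dst, layers):
    for c, a in layers:
        dst = c * a + dst * (1.0 - a)
    return dst

# (alpha, 1) additive blending: dst = src*a + dst. Order-independent,
# because it is just a sum of contributions.
def blend_add(dst, layers):
    for c, a in layers:
        dst = c * a + dst
    return dst

layers = [(1.0, 0.5), (0.2, 0.5)]
print(blend_over(0.0, layers), blend_over(0.0, list(reversed(layers))))  # results differ
print(blend_add(0.0, layers), blend_add(0.0, list(reversed(layers))))    # results equal
```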
We simply draw each object with an alpha value of 1% with the BlendFunc reported above. Each object is drawn twice, back faces then front faces.
The problem is that the transparency accumulates too much in some areas and we reach 100% opacity…
What if the farther objects are bigger than the closer ones? Won’t you miss something?
Sorting on the fly is prohibitive with so many objects, so does this problem have no solution?
If you can’t sort, the transparency needs to be order-independent, which limits your options.
One possibility is to vary the opacity with depth, so that closer surfaces have greater effect. But I suspect that you’ll need to vary the factors based upon layer density (i.e. the approximate number of surfaces overlapping a fragment).
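One hypothetical shape for such a scheme (the function name and weighting are assumptions for illustration, not something prescribed in this thread):

```python
# Sketch: scale each surface's alpha by how near it is, and optionally
# divide by an estimated layer density (approximate number of surfaces
# overlapping the fragment), so dense regions don't saturate.
def depth_weighted_alpha(base_alpha, depth, density=1):
    """depth in [0, 1], 0 = nearest; density = estimated overlapping layers."""
    near_weight = 1.0 - depth              # nearer surfaces get more weight
    return min(1.0, base_alpha * near_weight / max(density, 1))

print(depth_weighted_alpha(0.01, 0.0))              # nearest: full base alpha
print(depth_weighted_alpha(0.01, 0.5, density=10))  # far + dense: much weaker
```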
Sorting wouldn’t necessarily need to be exact; you could group polygons into layers according to approximate depth then render the layers in depth order.
What about using an alpha value that decreases with the number of objects? With a few objects I could use 0.01; with many, 0.00001. Otherwise, I would need a more conservative BlendFunc, one that does not exceed 50% transparency…
That would avoid the black blob. But you’ll probably just get a grey blob instead. The lower the alpha, the less effect each individual polygon has upon the result. Also, with 8 bits per component (RGBA8), you can have either ~0.004 (1/255) alpha or zero alpha, with nothing in between.
Ultimately, you’re going to have to discard some of the detail one way or another.
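The 8-bit quantization point can be checked directly (a Python sketch of the rounding an RGBA8 framebuffer performs, assuming round-to-nearest):

```python
# Each 8-bit channel stores k/255 for integer k in [0, 255]; any value
# below half a step rounds to zero.
def quantize8(x: float) -> float:
    return round(max(0.0, min(1.0, x)) * 255) / 255

print(quantize8(0.00001))  # 0.0 -- a 1e-5 alpha contributes nothing
print(quantize8(0.004))    # one step, ~0.00392
```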
What BlendFunc shall I use to avoid the alpha values being summed? I mean, in place of this one?
I’m willing to test a fixed alpha with edges.
Summation is implied by the default blend equation. The colour written to the framebuffer is
Cdst = Csrc*ksrc + Cdst*kdst
where Csrc and Cdst are the source and destination colours, and ksrc and kdst are the source and destination blending factors (α and 1-α in this case).
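Iterating that equation shows why the framebuffer converges on the source colour (a one-channel Python sketch of the arithmetic, matching the 1%-alpha setup described earlier in the thread):

```python
# One application of the default blend equation:
#   Cdst' = Csrc*ksrc + Cdst*kdst, with ksrc = a and kdst = 1 - a.
def blend_once(c_src, a, c_dst):
    return c_src * a + c_dst * (1.0 - a)

c = 1.0  # white background
for _ in range(1000):
    c = blend_once(0.0, 0.01, c)  # 1000 black polygons at 1% alpha
print(c)  # ~4e-5: effectively the solid blob described above
```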
You can specify different blending factors for colour (R,G,B) and alpha using
glBlendFuncSeparate (requires OpenGL 2.0 or later). You can specify an alternative to summation using
glBlendEquationSeparate (also OpenGL 2.0); the options are add, subtract, reverse subtract, minimum and maximum.
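Of those, minimum and maximum (GL_MIN/GL_MAX) ignore the blend factors entirely, which makes them order-independent. A sketch of their effect on one channel:

```python
# GL_MIN / GL_MAX keep the componentwise min/max of source and
# destination; the result is the same whatever order you draw in,
# so no depth sort is needed -- at the cost of losing accumulation.
def blend_min(dst, sources):
    for c in sources:
        dst = min(dst, c)
    return dst

def blend_max(dst, sources):
    for c in sources:
        dst = max(dst, c)
    return dst

srcs = [0.7, 0.2, 0.9]
print(blend_min(1.0, srcs))                  # 0.2, in any draw order
print(blend_max(0.0, list(reversed(srcs))))  # 0.9, in any draw order
```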
Ultimately, if you have a large number of polygons overlapping a given fragment, and all of those polygons have an equal effect upon the result, you aren’t going to be able to “see” any detail. But in order to have an unequal effect, you have to decide upon the effect for each polygon. Normally this is done by sorting, so that nearer polygons have a greater effect than farther polygons. But that’s going to be costly if the number of polygons is large.
You might be better off just drawing wireframe. Lines typically cover fewer pixels than polygons so the amount of overdraw (i.e. the number of times a given pixel is drawn) will be lower.
Thanks again, we will try also different approaches.