True transparency

Hi All,

We have always believed that true transparency was a hard thing to code. Yesterday a coworker suggested the following trick:

Actually, in OpenGL rendering both opaque (alpha=1) and transparent
(alpha<1) objects in the same scene is quite easy: First you draw all the
opaque objects, with depth buffer in normal operation. Then you toggle the
depth buffer read-only (by glDepthMask(GL_FALSE)), draw the transparent
objects and make the depth buffer writable again afterwards
(glDepthMask(GL_TRUE)). There should be no need to sort objects by
their viewpoint distances.
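Per pixel, the suggestion boils down to something like this (a CPU-side sketch, not real GL; the `Pixel` struct and `shade` function are just made-up names to illustrate the state machine):

```cpp
#include <cassert>

// One pixel of the framebuffer: a color and a depth value.
struct Pixel {
    float r = 0, g = 0, b = 0;
    float depth = 1.0f;              // cleared to the far plane
};

// One fragment arriving at that pixel. depthWrite mirrors glDepthMask();
// blend means standard glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
void shade(Pixel& p, float z, float r, float g, float b, float a,
           bool depthWrite, bool blend)
{
    if (z >= p.depth) return;        // depth test (GL_LESS) rejects it
    if (blend) {
        p.r = a * r + (1 - a) * p.r;
        p.g = a * g + (1 - a) * p.g;
        p.b = a * b + (1 - a) * p.b;
    } else {
        p.r = r; p.g = g; p.b = b;
    }
    if (depthWrite) p.depth = z;     // transparent pass leaves depth alone
}
```

With the depth buffer read-only, a transparent fragment behind an opaque surface is still rejected by the depth test, but transparent fragments never hide each other.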

What do you think?



This “trick” is very old and used by a lot of people.
But it does not solve the ordering problems that can occur between translucent faces (between two convex objects, or within a single concave object).
That is where you need sorting. Or use an order-independent blend mode, such as additive blending, darken, etc.
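The difference is easy to see in the blend math itself. A tiny single-channel sketch (the function names are made up; the factors named in the comments are the usual glBlendFunc ones):

```cpp
#include <cassert>

// Standard "over" blending, glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
// the result depends on which fragment arrives first.
float over(float srcColor, float srcAlpha, float dst) {
    return srcAlpha * srcColor + (1.0f - srcAlpha) * dst;
}

// Additive blending, glBlendFunc(GL_ONE, GL_ONE): a plain sum, so the
// fragment order cannot matter.
float additive(float srcColor, float dst) {
    return srcColor + dst;
}
```

Swapping the draw order of two translucent fragments changes the `over` result but not the `additive` one, which is why additive (or darken, via min) counts as order independent.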

Hi ZbuffeR,

As always, thanks for your help.

Can you give me more info (or point me to some resources) about additive blending, darken, etc.?

Our customers are striving for more realistic transparency.

Thanks so much again,



To be clearer, suppose we have a more complex scenario like the following:

Is it possible that the only solution is to skip the objects’ display lists and to sort the triangles at each frame before drawing them? That sounds so slow…

Can you please list all the OpenGL transparency approaches sorted from easy to complex to implement?

We need something more realistic, but we are scared of the complexity that always turns up.

Thanks again,


Transparency is a bitch.

To get it working, you can sort all triangles from back to front (for example using a BSP tree). That is computationally expensive and not fun to implement. I can’t advise you to do so, though: with complex scenes this won’t run fast enough.

Another solution is to use “depth-peeling”. That method requires modern hardware and will still be quite slow, but at least you don’t need to sort anything. However, from your other posts I suspect that you need to support older hardware, so that’s not really an option either.

A third solution is to sort only by “object” (i.e. by group of triangles) and render the objects roughly from back to front. That’s an approach taken very often, because it works well in many situations and usually isn’t difficult to implement. But there will always be situations where it doesn’t work properly.
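For illustration, the per-object sort can be as simple as this (a sketch; `Vec3` and the list of object “centers” are stand-ins for whatever your scene graph uses):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Squared distance is enough for ordering, and saves a sqrt per object.
static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort the transparent objects' centers back to front: farthest drawn first.
void sortBackToFront(std::vector<Vec3>& centers, const Vec3& eye) {
    std::sort(centers.begin(), centers.end(),
              [&](const Vec3& a, const Vec3& b) {
                  return dist2(a, eye) > dist2(b, eye);
              });
}
```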

Looking at your screenshot, I find that your results look quite nice already. If you are not doing it already, you could add sorting by object and leave it at that.

There are not really many options, when it comes to transparency. All completely correct solutions are complex and slow.


You still need to depth sort the transparent objects.

As has been said it’s an old trick, however it is flawed because transparent objects can occlude other transparent objects and can self-occlude.

You have a nice solution BUT transparency blend results are not order invariant.

If you have two 75% transparent objects and one occludes the other or vice versa the result will be highly dependent on drawing order. For it to be correct it needs to be sorted back to front.

Transparency occluding opaque objects in a zbuffered scene has never been a problem except for inexperienced graphics developers.

Sorting isn’t a huge problem either, although it can increase state changes etc.; the real problem is that object-level sorting is not always sufficient to get it correct.

Picture two transparent interlocking tori (two chain links).

Which order do you draw them in as objects to get the correct result? There is no correct order in this scenario.

A simple object-level sort is sufficient for most applications. Rather than a depth sort, try to insert a plane between the two objects and decide which side of the plane the eye is on. Use this mechanism to sort for the best object-level results.
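A sketch of that plane test (`Plane`, `Vec3` and `drawAlast` are made-up names; the plane normal is assumed to point toward object A):

```cpp
#include <cassert>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // all points p with n·p + d = 0

static float side(const Plane& pl, const Vec3& p) {
    return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d;
}

// With the plane's normal pointing toward object A: if the eye is on A's
// side of the plane, B is farther away, so draw B first and A last.
bool drawAlast(const Plane& pl, const Vec3& eye) {
    return side(pl, eye) > 0.0f;
}
```

The nice property is that the answer only flips when the eye actually crosses the plane, so the ordering is stable from frame to frame.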

Yes, although some graphics hardware with deferred-shading internals can handle it automatically (not available on mainstream platforms). At the application level there’s a need to compromise for good performance.

Alpha-to-Coverage with MSAA can do the trick.

Hi Guys,

Thanks a lot for your thoughts. Just a little clarification on the image: it was taken from the internet and is from the SolidWorks CAD application; it doesn’t belong to us…

We already implemented object-based sorting (using distances from the centroids), but simply putting a semi-transparent box inside another semi-transparent box (with the same centroid) gives an incorrect result.

Can we say that all modern 3D applications are using “depth-peeling” and require high-end hardware? I mean 3D Studio Max, LightWave3D, Maya, etc.

Where can we find very clear instructions on “depth-peeling”?

Thanks again,


Depth-peeling can be found in the nVidia SDK, GPU Gems and the ShaderX series.

Does this mean that “depth peeling” does not work on ATI hardware?

What about this additional trick to my original post:

generally this is true, but if one uses the GL_FUNC_ADD blend equation with the GL_SRC_COLOR and GL_DST_COLOR blend factors (which I guess you do, or which I would use), the order of two translucent faces does not matter, because one just gets an addition of colors



That came out of nowhere. Why would you say that?

Does ATI support depth textures? It should work then.

As for how this works, google for details, but it’s pretty simple. Conceptually you “peel away” one layer of transparency at a time, nearest to the eye first.

In more detail: you render all your alpha objects repeatedly in N passes, for N layers of transparency (i = 1…N). Into pass 1 you pass a depth texture of all zeros. Into pass i, you pass the depth buffer from pass i-1 as the depth texture. In each pass, you do a depth-texture lookup in the fragment shader, and only render the fragment that’s nearest to the eye (normal Z-buffering) BUT ALSO farther from the eye than the stored depth-texture value (discard in the fragment shader if not).

So as you can see, pixel by pixel, we peel away one layer of transparency. When done, blend all those resulting color buffers back-to-front.

Not the cheapest thing in the world, but if you can bound how many layers of transparency matter, it works. I’m sure there are other variations. Check the whitepapers for details. Google “dual depth peeling” for how you can peel two layers in one pass.
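To make the peeling loop concrete, here is a CPU-side simulation for a single pixel (`Frag`, `peel` and `composite` are invented names; on real hardware the inner test is the fragment-shader discard, and the depth texture plays the role of `prevZ`):

```cpp
#include <cassert>
#include <vector>

struct Frag { float z, color, alpha; };  // one color channel, for brevity

// Peel up to n layers for one pixel: each pass keeps the nearest fragment
// that is strictly farther than the previous pass's depth; everything else
// is "discarded".
std::vector<Frag> peel(const std::vector<Frag>& frags, int n) {
    std::vector<Frag> layers;
    float prevZ = 0.0f;                  // pass 1: depth texture of all zeros
    for (int i = 0; i < n; ++i) {
        const Frag* best = nullptr;
        for (const Frag& f : frags)
            if (f.z > prevZ && (!best || f.z < best->z))  // nearest beyond prevZ
                best = &f;
        if (!best) break;                // no layers left
        layers.push_back(*best);
        prevZ = best->z;
    }
    return layers;
}

// Composite the peeled layers back to front with "over" blending.
float composite(const std::vector<Frag>& layers, float background) {
    float c = background;
    for (auto it = layers.rbegin(); it != layers.rend(); ++it)
        c = it->alpha * it->color + (1.0f - it->alpha) * c;
    return c;
}
```

Note the input fragments are never sorted; the repeated nearest-beyond-prevZ test is what produces the front-to-back ordering.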

Thanks Dark Photon,

Am I asking too much for a working C++ sample? Do you know if there is one somewhere?

One last quick question:
Can we consider that this feature is available only if the hardware supports depth textures?



It is not a feature that you can just enable; you have to implement it yourself! And to implement it, you need depth textures and fragment shaders.

Here is a good source about depth-peeling.

BTW, that website covers a lot of other stuff, too.


That’s a little strong. You could do it with hardware that supports floating-point or even fixed-point textures (into which you’ve encoded floating-point values, e.g. by striping a float across an RGBA8). You just need some way to get floating-point depth values from the previous depth buffer into the shader, to an accuracy sufficient for your needs.
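For example, striping a depth value across the four 8-bit channels of an RGBA8 texel might look like this (a sketch of the encode/decode halves; the function names are made up, and in a shader you would do the same thing with fract/floor):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Pack a depth value in [0,1) into four 8-bit channels (one RGBA8 texel),
// giving roughly 32 bits of fixed-point precision.
void encodeDepth(float z, std::uint8_t out[4]) {
    double v = z;
    for (int i = 0; i < 4; ++i) {
        double ip;
        v = std::modf(v * 256.0, &ip);   // peel off the next 8 bits
        out[i] = (std::uint8_t)ip;
    }
}

// Rebuild the depth value from the four channels.
float decodeDepth(const std::uint8_t in[4]) {
    double v = 0.0, scale = 1.0 / 256.0;
    for (int i = 0; i < 4; ++i) {
        v += in[i] * scale;
        scale /= 256.0;
    }
    return (float)v;
}
```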

If you take a look at this code (the most serious found on the internet so far) you can see how hard this thing is to code…

What are the big players doing for nice transparency? I mean Autodesk, SolidWorks, etc.?



Either “sort by object” or “sort by polygon”. You can quickly construct a test case to see the artifacts: two intersecting triangles of different color and opacity.


How can you sort by object when one sphere is inside the other? They share the same centroid and the same distance from the viewer…



You can’t. With sort-by-polygon you can.
It’s all a trade-off of efficiency vs. correctness. With sort-by-object, you transform and sort a few objects. With sort-by-polygon, you transform, average-Z sort and stream-upload the geometry of many, many polygons (quite a performance hit).

With depth-peeling you effectively disable ALL the acceleration logic of the GPU and bypass the established pipeline.


Please be more clear.

With sort-by-polygon, you transform, average-Z sort and stream-upload the geometry of many, many polygons

Suppose you have a complex object (like a human body) made of 1,000 triangles in a display list. Now, if you want to display it correctly with a transparent material, do you need to skip the display list and sort each triangle from farthest to closest (computing the distance from the eye to each triangle’s centroid) at each frame (during a rotation, for example), or is there a more straightforward way to do it?
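To be concrete, the per-frame sort I have in mind looks something like this (a sketch with made-up `Vec3`/`Tri` types). One small shortcut I can think of is to transform the eye position into the model’s object space once per frame, instead of transforming 1,000 centroids:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 v[3]; };

static Vec3 centroid(const Tri& t) {
    return { (t.v[0].x + t.v[1].x + t.v[2].x) / 3.0f,
             (t.v[0].y + t.v[1].y + t.v[2].y) / 3.0f,
             (t.v[0].z + t.v[1].z + t.v[2].z) / 3.0f };
}

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Per-frame sort, farthest triangle first. eyeInObjectSpace is the camera
// position transformed into the model's object space, so the mesh itself
// never has to be transformed on the CPU.
void sortTriangles(std::vector<Tri>& tris, const Vec3& eyeInObjectSpace) {
    std::sort(tris.begin(), tris.end(),
              [&](const Tri& a, const Tri& b) {
                  return dist2(centroid(a), eyeInObjectSpace) >
                         dist2(centroid(b), eyeInObjectSpace);
              });
}
```

Of course the sorted triangles would then have to be drawn in immediate mode or re-uploaded each frame, so the display list is lost either way.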