Depth peeling

For the GlowScript project I’ve written the code to handle transparency using depth peeling, but alas it doesn’t work. I think I’ve narrowed the problem down to the following test cases, and I could sure use some advice or insight.


I render the opaque objects to a texture I call D0, with false colors representing depths.

Next I render the opaque objects to a texture I call C0, with real colors determined by object color, lighting, and textures and bumpmaps if any.

If I render to a quad applying the texture D0, I see (as I should) bands of color representing constant depths on the opaque objects, which shows that I’ve created a texture that can serve as a depth buffer.

If I render to a quad applying the texture C0, I see correctly lit and textured opaque objects.


After creating D0, I render to C0 but this time I color the fragment like this, where the canvas is 512x512:

gl_FragColor = vec4( texture2D(D0, gl_FragCoord.xy/512.0).xyz, 1.0 );

Alas, when I render to a quad applying the texture C0, it is evident that I’ve set gl_FragColor always to (0.0, 0.0, 0.0, 1.0). All of my tests are consistent with the summary statement that I can apply D0 to a quad but I can’t read information from D0 in the fragment shader, which I need to be able to do for the depth peeling algorithm.
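
For what it’s worth, here is the kind of encoding I mean by “false colors representing depths.” This is a CPU-side sketch, not GlowScript’s actual code; the function names are made up, and the exact packing scheme in the real shaders may differ. The idea is that WebGL 1 textures hold 8-bit channels, so a depth value gets spread across several channels to retain precision:

```javascript
// Illustrative sketch: pack a depth z in [0, 1] into three 8-bit
// channels (as a fragment shader would write to gl_FragColor.rgb),
// and unpack it again. Names are hypothetical, not GlowScript's.
function encodeDepth(z) {
  let v = z * 255;
  const r = Math.floor(v); v = (v - r) * 255;
  const g = Math.floor(v); v = (v - g) * 255;
  const b = Math.floor(v);
  return [r, g, b];              // bytes, as they would land in the texture
}

function decodeDepth([r, g, b]) {
  // Base-255 positional decode; error is below 1/255^3.
  return r / 255 + g / (255 * 255) + b / (255 * 255 * 255);
}
```

A round trip through encode/decode reproduces the depth to within about 6e-8, which is far better than a single 8-bit channel could manage.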

I’ve verified by other tests that gl_FragCoord.x and .y range over the interval 0 to 511, so I assume I need to divide by 512 to index into the texture, which I understand to be accessed by coordinates ranging from 0 to 1, since that’s the case when applying a texture to a triangle.
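
One detail worth checking here: by the GLSL spec, gl_FragCoord.xy holds window coordinates at pixel *centers*, so for a 512x512 canvas the values are actually 0.5, 1.5, …, 511.5. Dividing by the canvas size then lands exactly on the texel centers of a same-sized texture, which is what texture2D wants. A small CPU-side check of that arithmetic (names illustrative):

```javascript
// gl_FragCoord.xy samples at pixel centers: 0.5 .. SIZE - 0.5.
// Dividing by SIZE maps a pixel center to the matching texel center
// of a SIZE x SIZE texture.
const SIZE = 512;

function fragCoordToTexCoord(xWindow) {
  return xWindow / SIZE;        // e.g. 0.5 -> 0.5/512, 511.5 -> 511.5/512
}

function texelCenter(i) {
  // Texture coordinate of the center of texel i in [0, SIZE).
  return (i + 0.5) / SIZE;
}
```

So dividing gl_FragCoord.xy by 512.0 is indeed the right indexing for a texture the same size as the canvas.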

Any ideas? Of course I may have simply made some simple programming error, but do you see something fundamentally wrong with my test cases?

If this worked, presumably the rest of the already completed algorithm would work. The next step is to render to a texture C1 using the depth information in D0 to discard any points farther away than the opaque objects. The step after that is to render transparent objects to a texture D1 with false colors representing depths, discarding fragments that don’t lie between the opaque objects and C1, and use D0 and D1 to generate C2. Etc. I don’t see any efficient stop criterion, so my plan was simply to go as far as C4 (4 transparent layers) and piously hope that this would be good enough.
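
The pass structure described above can be sketched per pixel on the CPU. This is only an illustration of the selection logic, with made-up names, not the actual shader code: each pass keeps the nearest fragment that is closer than the opaque surface and strictly behind whatever was peeled in the previous pass. Note that per pixel there *is* a natural stop (no fragment qualifies), even if there is no cheap global stop criterion:

```javascript
// Per-pixel sketch of depth peeling's pass structure (illustrative).
// fragments: [{depth, color}, ...] for one pixel; opaqueDepth: depth of
// the opaque surface at that pixel; maxLayers: cap on peeled layers.
function peelLayers(fragments, opaqueDepth, maxLayers = 4) {
  const layers = [];
  let prevDepth = -Infinity;               // nothing peeled yet
  for (let pass = 0; pass < maxLayers; pass++) {
    let best = null;
    for (const f of fragments) {
      if (f.depth <= prevDepth) continue;  // already peeled in earlier pass
      if (f.depth >= opaqueDepth) continue; // hidden by an opaque object
      if (best === null || f.depth < best.depth) best = f;
    }
    if (best === null) break;              // this pixel is done
    layers.push(best);
    prevDepth = best.depth;
  }
  return layers;                           // front-to-back transparent layers
}
```

Capping maxLayers at 4 corresponds to stopping at C4 as described above.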

I guess just writing down my problem was sufficient. I found a simple programming error (wrong variable used in a for loop). Now I can really work on the depth peeling algorithm.

I’ve made an experimental implementation of depth peeling in GlowScript to handle transparency. This is the third GPU project I’ve attempted (pick, a success; extent, a failure; depth peeling, well…).

There are visual artifacts, and I’m not sure what the cause is. It seems possible that the problem could be related to the low accuracy of gl_FragCoord.z.

It takes FOREVER to create all 8 textures needed for depth peeling: it adds another 10 seconds to the startup time, and it means there’s a 10-second delay to create a second canvas. The delay seems to be independent of the size of the textures. Puzzling. Does anyone have an explanation for why these operations are so slow?

Once that’s done, performance seems surprisingly good, at least in the situations I’ve played with where there aren’t very many transparent objects.

I build 5 color textures: one for opaque objects and 4 for transparent objects (plus there are 3 depth textures).

I’m not quite sure what to say about this depth peeling development. After a lot of work spread over a long time I got things functioning only this morning, so maybe I’ll find a solution to the visual artifacts. However, the very slow startup is a real bummer.

You can see this here: … ansparency

I was wrong. It’s not the texture creation that’s taking 10 seconds (which didn’t make sense). It’s the creation of the shader programs. They’ve gotten larger and more complex. Perhaps I need to restructure into more shader programs, each one more special-purpose? Like maybe a different fragment shader for every pass of the depth peeling algorithm? Or maybe make compiled binaries (about which I know nothing)?


“Depth peeling” is an interesting algorithm for handling transparency. The problem it solves is this: suppose you have two intersecting objects, both transparent. A simple scheme like the one used in VPython is to sort the transparent objects by depth into the screen, by their centers; the ones at the front shadow the ones next further back, and so on. This doesn’t work properly if the two objects intersect, because part of the object whose center is further back may actually be in front of part of the object whose center is further forward.

What you really need to do is sort by pixel, not by object. In depth peeling you first get a picture of all the opaque objects. Then you get a map of the depths of all the pixels in the opaque layer, and you make a picture of all the transparent objects, discarding any pixels that are behind an opaque pixel. In the next pass you make a picture of the next most forward layer of transparent pixels by discarding any contributions whose depth does not lie between the opaque objects and the front-most layer of transparent objects. Etc.
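
Once the layers are peeled, combining them into a final picture amounts to standard “over” compositing: start from the opaque color and blend each transparent layer in, from farthest to nearest. A single-channel CPU-side sketch (the field names are made up for illustration):

```javascript
// Composite peeled transparent layers over the opaque background.
// layersFrontToBack: [{color, alpha}, ...], nearest layer first.
// Colors here are scalars for simplicity; real code does this per channel.
function compositeOver(opaqueColor, layersFrontToBack) {
  let color = opaqueColor;
  // Blend back to front so nearer layers are applied last.
  for (let i = layersFrontToBack.length - 1; i >= 0; i--) {
    const { color: c, alpha: a } = layersFrontToBack[i];
    color = a * c + (1 - a) * color;     // the "over" operator
  }
  return color;
}
```

With this ordering, a 50%-opaque near layer correctly halves whatever the farther layers and the opaque background contributed.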

This is feasible with GPUs, which do all this work in parallel. While one shader core is working on a pixel of one object, another core is working on a pixel of a different object, and there are hundreds of such cores on a modern graphics card. Simultaneous writes of colors to the same pixel location (corresponding to different objects at different depths) are resolved by hardware that breaks the tie by keeping the pixel that corresponds to the front-most depth.
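
That tie-breaking hardware is just the depth test: among all fragments written to a pixel, the smallest depth wins, regardless of arrival order. A tiny CPU-side model of it (illustrative names, not real WebGL calls):

```javascript
// Minimal model of a z-buffered framebuffer and its depth test.
function makeFramebuffer(w, h) {
  return Array.from({ length: h }, () =>
    Array.from({ length: w }, () => ({ depth: Infinity, color: null })));
}

// Write a fragment only if it is nearer than what is already stored,
// so the front-most fragment wins no matter the write order.
function depthTest(fb, x, y, depth, color) {
  const p = fb[y][x];
  if (depth < p.depth) {
    p.depth = depth;
    p.color = color;
  }
}
```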