Understanding the pipeline: WHEN is 'depth comparison' happening?

Hey,

I am trying to understand the basics of the pipeline, and can’t figure out this:

At some point we can throw away Fragments that share the same x,y coords with another Fragment but are farther from the camera than the nearest Fragment at those coords.

  • Is this the so-called Depth Test as described here? Depth Test - OpenGL Wiki

  • And if so, why don’t we always do it before the Fragment Shader? That way we could save resources, since the Fragment Shader wouldn’t have to process those ‘duplicate’ Fragments, which will be thrown away anyway…

I THINK I GOT IT - as I am writing :slight_smile: -> is this because of possible transparencies? So this would mean that transparency is computed only in the Fragment Shader stage?

I think this is it, but I’ll post it anyway… maybe it helps someone, or maybe I am totally wrong… :wink:

Greetings,
Rico

Have a look at the OpenGL pipeline here. The depth test happens very late. As you can see, the stencil test and other operations happen before it. The stencil test can perform different operations depending on whether the z-test succeeded or not, so doing the depth test earlier could interfere with the stencil test, for example.

Under some conditions you can benefit from the early z-test. See this for example.

The OpenGL pipeline does depth tests before the fragment shader, when possible (and has for many years).

In this OpenGL 4.4 pipeline diagram (from the OpenGL Insights site), this is labeled “early depth test” (see the middle-right panel named “OpenGL 4.4 Fragment Processing”). Originally this was just done under-the-covers for efficiency. But some of this behavior is now exposed in the OpenGL interface.

Under-the-covers, there can be multiple stages of early Z (e.g. a coarse-grained hierarchical Z test, and a fine-grained per-sample Z test).

Also note that stencil test is often performed early as well, when possible.

I THINK I GOT IT - as I am writing :slight_smile: -> is this because of possible transparencies? So this would mean that transparency is computed only in the Fragment Shader stage?

You’re on the right track. Transparency itself isn’t a problem. It’s when you allow the fragment shader to do something that may change the depth value that ends up getting written into the depth buffer. So for instance, if you’re using discard (or similarly rendering transparencies with GL_ALPHA_TEST), then that should prevent early Z tests. Similarly, changing the fragment depth in the fragment shader should also prevent early Z.
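For example, here’s a minimal fragment shader sketch of the alpha-tested transparency case (the texture and uniform names are made up). Because any fragment might be discarded, the driver can’t commit the depth write before the shader has run:

```glsl
#version 330 core

// Hypothetical alpha-tested transparency; uniform/varying names are made up.
uniform sampler2D uTexture;     // texture with an alpha channel
uniform float uAlphaCutoff;     // e.g. 0.5

in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    vec4 texel = texture(uTexture, vTexCoord);

    // The fragment may or may not survive, so the depth write can't
    // safely happen before this shader has executed.
    if (texel.a < uAlphaCutoff)
        discard;

    fragColor = texel;
}
```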

Just to complete the story.

Earlier GL versions only specified the depth test to happen after the fragment had been shaded, as one of the per-fragment operations. There are a number of reasons for this - one of them being that writing depth is part of fragment processing (for newer versions you’d say that the fragment shader is allowed to write depth), and so you can’t test depth until you know what has been written for depth.

However, somewhere around 2001/2002/2003, those clever hardware Johnnies figured out that if the fragment shader doesn’t write depth, and if you can test before the fragment shader runs and still get the same result as if it had been tested after, you could still meet the requirements of the GL specification and gain a nice performance optimization.

There was a whole list of conditions that could break this setup: writing depth was one, reversing the direction of the depth test mid-frame was another, operating on depth fail was a third, tricksy schemes to avoid clearing the depth buffer were a fourth, and there were probably more - but so long as you were either careful or clever enough, you could do it.

This happened automatically. You didn’t need to enable/disable anything, just set up your states to meet the driver’s requirements and the driver would do it.

Since then this kind of early depth testing has been embraced by the specifications, which now give you more control over its operation.

Thanks to all your links and descriptions, that has helped a lot!!!

:slight_smile:

Here again, I don’t get it 100%: how can the fragment shader, which comes AFTER possible early Z-tests, prevent those?

Or is there a kind of test run for the first pixel or something like that, so that the whole pipeline is briefly ‘tested’ or ‘checked’ and then ‘certified’ to go?

Cheers!

By preventing early depth tests from being used. Transparent (i.e. “under the hood”) early depth optimisations can only be used when the implementation can determine that it is safe to do so. Later versions allow the shader code to control the use of early depth (and stencil) tests via the [var]early_fragment_tests[/var] layout qualifier, so that they can be enabled even in situations where it has an observable effect.
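For instance, here is a sketch of a shader that opts in explicitly (the image name and binding are made up; assumes GL 4.2+ for the image atomics):

```glsl
#version 420 core

// Request that the depth/stencil tests run *before* this shader, so that
// fragments which fail them never reach the image store below.
layout(early_fragment_tests) in;

// Made-up counter image; r32ui format is required for imageAtomicAdd.
layout(binding = 0, r32ui) uniform uimage2D uCounters;

out vec4 fragColor;

void main()
{
    // Only fragments that passed the early depth/stencil tests get counted.
    imageAtomicAdd(uCounters, ivec2(gl_FragCoord.xy), 1u);
    fragColor = vec4(1.0);
}
```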

No, the checks are made statically. E.g. if a fragment shader statically assigns to gl_FragDepth (i.e. there’s an assignment statement with gl_FragDepth on the left-hand side, even if it’s within a conditional that will never actually execute), then early tests won’t be enabled as a transparent optimisation. Similarly for image stores, atomic operations, etc., which are supposed to be executed even for fragments which fail the depth or stencil tests. And the use of [var]discard[/var] statements will prevent the depth value from being written prior to execution of the fragment shader, although the implementation can still skip the fragment shader entirely (along with writing the depth and stencil value) if an early fragment test fails (provided that stencil indices aren’t modified if the depth test fails).
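To illustrate the “statically assigns” part, a made-up sketch - the branch below never runs, but the assignment still counts:

```glsl
#version 420 core

// The condition is assumed never to be true at runtime, yet the shader still
// contains a *static* assignment to gl_FragDepth, so the implementation has
// to assume the depth value might change and won't apply the transparent
// early-test optimisation.
uniform bool uNeverTrue;   // hypothetical uniform, always set to false

out vec4 fragColor;

void main()
{
    fragColor = vec4(1.0);

    if (uNeverTrue)
        gl_FragDepth = 0.0;   // never executes, but is still a static assignment
}
```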

However, early fragment tests can still be enabled explicitly by the early_fragment_tests qualifier. Doing so moves the burden of maintaining correctness from the implementation to the application. Also, if the fragment shader applies one of the depth_{greater,less,unchanged} qualifiers to gl_FragDepth, transparent early depth tests may still be used even if the shader statically assigns to gl_FragDepth.
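A sketch of the latter (the bias uniform is made up):

```glsl
#version 420 core

// Promise the implementation that this shader only ever moves the fragment
// further away than the interpolated depth. With a GL_LESS depth test, a
// fragment rejected early at gl_FragCoord.z would also be rejected with the
// final value, so the transparent early-depth optimisation remains usable.
layout(depth_greater) out float gl_FragDepth;

uniform float uDepthBias;   // hypothetical, assumed >= 0

out vec4 fragColor;

void main()
{
    gl_FragDepth = gl_FragCoord.z + uDepthBias;   // only ever increases depth
    fragColor = vec4(1.0);
}
```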

This pipeline diagram might deserve a place in the official OpenGL wiki, along with your description, which looks far more up to date than the official documentation (and would thus help old-pipeline-minded people like me stay up to date).

In my opinion.

This looks very interesting. Is this commonly used? I must admit it’s the first time I’m hearing about that.

id Software did it back in the Quake days. It involves splitting the depth buffer in two, using depth range 0…0.5 and 1…0.5 on alternate frames, and also reversing the sense of the depth test on the same alternate frames. Code.

This mostly affects hierarchical-Z schemes. Because the depth buffer is never cleared, it never gets reset to its “compressed” state, and so the hardware can’t take advantage.

Thank you.

That helped! Sounds like I should do less theory and more programming :wink:

Thanks to you and all the others for helping me out!!