Depth shader

So GPUs have optimizations related to depth testing: hierarchical Z.

In the fragment shader, if you change the depth or do a discard, those optimizations are disabled.
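For example, either of these in a GLSL fragment shader is enough to switch those optimizations off (a minimal sketch; `alphaTex` and `texCoord` are just assumed names):

```glsl
#version 330 core
// Minimal sketch showing the two operations under discussion.
// Either one alone typically disables early-Z/Hi-Z, because the final
// depth/coverage is only known after the shader has run.
uniform sampler2D alphaTex;
in vec2 texCoord;
out vec4 color;

void main()
{
    // Case 1: fragment discard, e.g. an alpha test done in the shader.
    if (texture(alphaTex, texCoord).a < 0.5)
        discard;

    // Case 2: writing the depth instead of using the interpolated one.
    gl_FragDepth = gl_FragCoord.z * 0.999;

    color = vec4(1.0);
}
```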

Why not have another programmable stage in there called the depth shader? In that shader, you would put only the code related to depth generation or fragment discard.
Another benefit is that this would simplify your fragment shader a little bit.
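Purely as a sketch of the idea (this stage and its placement are hypothetical, nothing like it exists):

```glsl
// HYPOTHETICAL "depth shader" stage, invented here for illustration.
// It would run before the fragment shader and hold only the
// depth/discard logic, leaving the fragment shader with color work.
uniform sampler2D alphaTex;
in vec2 texCoord;

void main()
{
    // All fragment-discard logic would live in this stage...
    if (texture(alphaTex, texCoord).a < 0.5)
        discard;

    // ...as would all depth generation.
    gl_FragDepth = gl_FragCoord.z;
}
```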

I guess it should be called “clip shader” and be performed before the fragment shader? :slight_smile: But how would it maintain Z optimizations?

Actually, I think it would not help Hi-Z. Damn.
Ok, never mind.

If you write depth, you can’t have those early-depth/hierarchical-Z optimizations, because they assume that depth changes linearly across the triangle in screen space. If a fragment shader, or your proposed “depth shader”, changes the depth instead of taking the interpolated one, that assumption no longer holds, and so the optimizations cannot be performed.
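To see why the linearity assumption matters: the interpolated depth is an affine function of the screen position,

$$ z(x, y) = A\,x + B\,y + C, $$

so its extrema over a rectangular tile occur at the tile’s corners:

$$ \min_{(x,y)\,\in\,\text{tile}} z(x, y) = \min_i \, z(\text{corner}_i). $$

For a GL_LESS depth test, the hardware can then store one value per tile (the maximum depth currently in that tile) and reject the whole tile whenever the triangle’s minimum possible z over the tile, computed from the corners, exceeds that stored maximum. Once the shader may overwrite z arbitrarily, no such conservative bound can be derived from the plane equation, and the tile-level test becomes unsound.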

[ www.trenki.net | vector_math (3d math library) | software renderer ]

Originally posted by V-man:
Actually, I think it would not help Hi-Z. Damn.
Ok, never mind.

You are welcome :slight_smile:

Well, HiZ and EarlyZ don’t necessarily have to operate on interpolated Z values. For Z-compression you naturally want linear data for it to be compressible, but for culling chunks of fragments this should not be necessary. The HiZ implementation might be more complex, but for EarlyZ this should not be a problem (hardware guys may think differently, but I don’t see a problem myself).
With that said, I don’t think this is worth the effort. Modifying depth is not the most useful feature anyway. I think it would have been more useful to output a depth offset instead, so that it would have been more compatible with multisampling.
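To illustrate why an offset plays nicer with multisampling: writing an absolute gl_FragDepth gives every sample in the pixel the same depth, whereas an offset could be added to each sample’s own interpolated depth. A purely hypothetical sketch (`gl_FragDepthOffset` is invented here; no such built-in exists):

```glsl
#version 330 core
// HYPOTHETICAL depth-offset output, invented for illustration only.
// The hardware would add the offset to each sample's own interpolated
// depth, preserving per-sample depth variation, unlike writing a
// single absolute gl_FragDepth for the whole pixel.
uniform sampler2D heightMap;
in vec2 texCoord;
out vec4 color;

void main()
{
    float h = texture(heightMap, texCoord).r;
    gl_FragDepthOffset = (h - 0.5) * 0.01; // hypothetical built-in
    color = vec4(1.0);
}
```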