Not sure how much it’s used nowadays. The question though is whether it’s the best solution for your situation given your unique requirements. And only you know those.
Off-the-cuff, you’d think so. But when you start to think “how would I do this” you realize it’s really cheap (**)
** = …that is, assuming you don’t have a bazillion tiny bits of memory per scene object in your scene graph that need to be added to the output draw list when that object culls into the frustum.
So currently, you presumably have a basic cull pass that runs on your scene, cull testing objects, and adding in-frustum objects to an output “draw list”.
How does having a “near subfrustum” and a “far subfrustum” change this? It’s still only one cull pass over your scene, as before – nothing different there. But you simultaneously build 2 output draw lists instead of one, binning in-frustum objects into 0, 1, or 2 of these based on the cull test. Per object: if it’s out-of-frustum, same as before: discard it. If it’s in-frustum, then test the object’s bsphere against the plane separating the “near subfrustum” and the “far subfrustum” (call this the “mid-plane”). If it’s totally behind the plane, bin the object in the far subfrustum’s draw list. If it’s totally in front of the plane, bin it in the near subfrustum’s draw list. And if it straddles the plane, bin it in both.
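A minimal sketch of that binning, assuming simple bounding-sphere and plane types (all the type and function names here are hypothetical, not from any particular engine):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical minimal types -- your engine's will differ.
struct Vec3   { float x, y, z; };
struct Sphere { Vec3 center; float radius; };
struct Plane  { Vec3 normal; float d; };   // dot(normal, p) == d on the plane

struct SceneObject { Sphere bounds; int id; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Stand-in for your existing frustum cull test (assume everything visible here).
bool inFrustum(const SceneObject&) { return true; }

// Still one pass over the scene, but two output draw lists.
void cullAndBin(const std::vector<SceneObject>& scene,
                const Plane& midPlane,             // separates near/far subfrusta
                std::vector<const SceneObject*>& nearList,
                std::vector<const SceneObject*>& farList)
{
    for (const SceneObject& obj : scene) {
        if (!inFrustum(obj))
            continue;                              // out-of-frustum: discard, as before

        // Signed distance from the sphere center to the mid-plane
        // (positive = near side, by the convention used here).
        float dist = dot(midPlane.normal, obj.bounds.center) - midPlane.d;

        if (dist <= -obj.bounds.radius) {
            farList.push_back(&obj);               // wholly behind: far list only
        } else if (dist >= obj.bounds.radius) {
            nearList.push_back(&obj);              // wholly in front: near list only
        } else {
            nearList.push_back(&obj);              // straddles the mid-plane: both
            farList.push_back(&obj);
        }
    }
}
```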
In practice what you find is that 99.9% of the time, the object is totally behind that “mid-plane”, and so almost no scene objects get inserted into the near subfrustum’s draw list. So the only extra work per-object (and for the in-frustum objects only) is that sphere-plane test.
And if you know anything about bounding sphere culling math, you’ll immediately realize that this is super-simple and cheap: it’s one dot product and one float compare. It doesn’t get much cheaper than that.
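Concretely, with the mid-plane stored as a unit normal and an offset, the whole test boils down to this (a sketch with hypothetical names, assuming the normal is already normalized):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Signed distance from point p to the plane dot(n, p) == d,
// assuming n is unit length: one dot product and a subtract.
float signedDistance(const Vec3& n, float d, const Vec3& p) {
    return n.x * p.x + n.y * p.y + n.z * p.z - d;
}

// "Is the bounding sphere wholly behind the mid-plane?"
// -- one float compare against -radius.
bool whollyBehind(const Vec3& n, float d, const Vec3& center, float radius) {
    return signedDistance(n, d, center) <= -radius;
}
```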
Super-cheap as well. Draw the far subfrustum’s draw list first, then the near one. You need a depth buffer clear between the two, but that’s super-cheap too (no, the GPU doesn’t go memset the pixels when you call glClear( GL_DEPTH_BUFFER_BIT ); it most likely just sets a few flags that say the depth buffer has been cleared, and resets the hierarchical/early Z structures for fast depth testing going forward).
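The per-frame sequence is then just far pass, depth clear, near pass. Sketched here with hypothetical stand-in hooks that log commands instead of touching a real GL context (in GL the clear would be the glClear( GL_DEPTH_BUFFER_BIT ) call above):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical frame log standing in for real GPU commands,
// so the ordering below is checkable without a GL context.
std::vector<std::string> gpu;

void drawList(const std::string& which) { gpu.push_back("draw " + which); }
void clearDepth()                       { gpu.push_back("clear depth"); }

void renderFrame() {
    drawList("far");    // far subfrustum's draw list first...
    clearDepth();       // ...cheap depth clear between the two passes...
    drawList("near");   // ...then the near subfrustum's list on top
}
```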
I’ll defer to other on-line blog posts discussing this. For instance, these mention it:
I’ve never implemented the Logarithmic Depth Buffer approach because it seems to create problems (some performance-related) that do not arise from other approaches such as a 32-bit Float Reversed Depth Buffer.
Honestly, if you’d like to just try one of these approaches to see if it helps, I’d definitely opt for the 32-bit float reversed depth buffer. It’s simple and doesn’t come with any ugly side-effects.
But before you do, you might do some analysis on a failure case to determine what depth buffer precision is required to avoid Z-fighting in that situation. This may very well preclude one or more of the above methods.
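As a sketch of that kind of analysis (with assumed near/far planes of 0.1 and 100000 units, and a [0,1] depth mapping for simplicity), you can compute the post-projection depth of two surfaces one unit apart at a distance of ~90000, and check whether a 32-bit float depth buffer can even tell them apart:

```cpp
#include <cassert>
#include <cmath>

const double n = 0.1, f = 100000.0;  // assumed near/far planes

// Standard [0,1] depth: values crowd toward 1 near the far plane,
// exactly where float precision is sparsest.
double standardDepth(double z) { return f * (z - n) / (z * (f - n)); }

// Reversed-Z: values crowd toward 0 near the far plane,
// exactly where float precision is densest.
double reversedDepth(double z) { return n * (f - z) / (z * (f - n)); }

// Depth step between two surfaces 1 unit apart at z ~= 90000.
double stepStd = standardDepth(90001.0) - standardDepth(90000.0);
double stepRev = reversedDepth(90000.0) - reversedDepth(90001.0);

// One ULP of a 32-bit float at each stored depth value.
float  dStd   = (float)standardDepth(90000.0);
float  dRev   = (float)reversedDepth(90000.0);
double ulpStd = std::nextafterf(dStd, 2.0f) - dStd;  // ~6e-8 near 1.0
double ulpRev = std::nextafterf(dRev, 1.0f) - dRev;  // ~7e-15 near 1e-7
```

With these numbers the standard depth step (~1e-11) falls well below one float ULP near 1.0, so both surfaces quantize to the same depth value and Z-fight; the reversed-Z step is comfortably above the ULP at its stored value, so the surfaces stay distinguishable.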