My existing renderer targets OpenGL 3.3 with strictly no extensions. I’m planning to write a new version of the renderer but targeting Vulkan instead.
In the existing renderer, I use logarithmic depth everywhere. More specifically, each shader writes an encoded form of the negated eye-space Z value to gl_FragDepth, and I reconstruct fragment positions from these encoded depth values. I don't use a separate Z buffer; the logarithmic depth values go straight into the depth buffer.
The two issues with this:
- Every shader has to write gl_FragDepth explicitly; you can't just rely on the depth value the rasterizer derives automatically from the vertex shader's output.
- It disables early-Z optimizations, leading to overdraw.
In the original article by Brano Kemen, he describes several workarounds for this (such as glDepthRangedNV). Unfortunately, all of them require extensions that may not be supported everywhere. He describes most of the issues as essentially OpenGL restrictions (the majority of which are absent from the DirectX implementations on the same hardware).
So… my question is: what are my options for doing logarithmic depth buffers efficiently in Vulkan without extensions? I'm fine with having to write gl_FragDepth explicitly everywhere, but I'd rather this didn't carry a performance cost. Is there a conservative depth equivalent in Vulkan that actually improves performance?
Sorry... new user; the forum won't let me post clickable URLs. References:
- io7m.github.io/r2/documentation/p2s24.xhtml
- io7m.github.io/r2/documentation/p2s21.xhtml
- outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html