I have several million translucent points ordered by depth. Currently I sort them front to back with a bitonic sort in a compute shader and render them with under-blending, with depth testing disabled.
I’m trying to improve performance, and I suspect much of the fragment processing is wasted: many fragments lie behind an already almost-opaque result, since a single pixel can be covered by thousands of points.
Is there a way to tell Vulkan to skip processing a fragment when the alpha value already in the framebuffer is above some threshold (say 0.95), similar to what depth testing does for depth?
Have you measured that fragment processing is in fact where you spend most of the time?
Offhand I can’t think of a built-in mechanism that would do this for you, i.e. reject fragments based on framebuffer alpha before the fragment shader is even invoked.
If you bind your color target also as an input attachment, you can read the accumulated color and discard when the opacity is above the threshold. I don’t know how much that actually improves things; for a quick estimate you could simply modify your fragment shader to discard a large fraction of fragments (perhaps based on depth range?) and see what the potential gain is.
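A minimal sketch of that idea in GLSL, under some assumptions: the color target is bound as input attachment 0, accumulated opacity lives in the alpha channel, and the render pass declares a subpass self-dependency (with the attachment in `VK_IMAGE_LAYOUT_GENERAL`) so reading the attachment you are also writing is legal. The names `uAccum`, `vColor`, and `oColor` are illustrative, not from your setup:

```glsl
#version 450

// Color target bound a second time as input attachment 0.
// Requires a subpass self-dependency in the render pass.
layout (input_attachment_index = 0, set = 0, binding = 0)
    uniform subpassInput uAccum;

layout (location = 0) in  vec4 vColor;  // premultiplied point color
layout (location = 0) out vec4 oColor;

void main() {
    // With front-to-back under-blending, accumulated alpha only grows,
    // so once it crosses the threshold this pixel is effectively opaque
    // and every later point behind it can be skipped.
    if (subpassLoad(uAccum).a > 0.95) {
        discard;
    }
    oColor = vColor;
}
```

Note that the `subpassLoad` itself still runs for every fragment, so the saving comes from skipping the rest of the shader and the blend, not from avoiding the invocation entirely.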
For an alternative way to render large point clouds (using a compute shader to do the rasterization), see this repo and the papers linked from its readme.