Why don't people use Compute Shaders?

Hi everyone,

I was wondering why it’s so uncommon to see (mainstream) applications use compute shaders. This may just reflect the very limited number of applications I’ve had the patience to actually run under a graphics debugger, but I would have expected at least some of them to use the compute stage.

Recently I was looking at how Blender computes entity outlines, and it’s done with a pixel shader; Unity does the same (for bloom, for example). I also noticed the same pattern in other applications (mostly CAD).

Is it just my impression, or is it true that developers are “avoiding” compute shaders?

Is it a matter of compatibility (aren’t they available on all D3D11-compatible hardware)? Or aren’t there enough advantages to make developers convert the old “draw a full-screen quad” code into compute shaders?

Why do you believe that it’s uncommon?

Both of those are graphics tasks. The end product of the process is a visual effect relative to the screen. Why would you want to use a CS for that? It can’t render to framebuffers. Well, you can use image load/store to write to those images, but there’s no advantage to doing that unless you’re writing in a random-access way.
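
For what it’s worth, here’s a minimal GLSL sketch of that image load/store route (the image name, format, and binding are illustrative assumptions, not taken from any of the applications mentioned):

```glsl
#version 430
layout(local_size_x = 16, local_size_y = 16) in;

// Hypothetical destination image; format and binding are assumptions.
layout(rgba8, binding = 0) writeonly uniform image2D uDst;

void main()
{
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);

    // Guard against the partial workgroups at the right/bottom edges.
    if (any(greaterThanEqual(texel, imageSize(uDst))))
        return;

    // Each invocation may write any texel it likes (random access),
    // which a fragment shader cannot do.
    imageStore(uDst, texel, vec4(1.0, 0.0, 0.0, 1.0));
}
```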

Compute shaders in graphics APIs are typically used for pre-rendering tasks: things like doing frustum culling by generating indirect rendering commands, or updating particle positions. But if the effect is directly connected to the framebuffer, it is often more useful to just use the rendering pipeline.
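
To make the particle case concrete, here is a hedged sketch of such a pre-rendering pass (the Particle layout and the uDeltaTime uniform are my own assumptions):

```glsl
#version 430
layout(local_size_x = 64) in;

// Illustrative particle layout; real engines vary.
struct Particle {
    vec4 position; // xyz = position, w = padding under std430
    vec4 velocity; // xyz = velocity, w = padding
};

layout(std430, binding = 0) buffer Particles {
    Particle particles[];
};

uniform float uDeltaTime;

void main()
{
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(particles.length()))
        return;

    // Simple Euler step; the subsequent draw call reads the same buffer.
    particles[i].position.xyz += particles[i].velocity.xyz * uDeltaTime;
}
```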

Thanks for the answer!

However, I still don’t understand everything you said:

Both of those are graphics tasks. The end product of the process is a visual effect relative to the screen

I know, but in the specific example (Blender and outlines) the result is written to a texture and stored for later use.

But if the effect is directly connected to the framebuffer, it is often more useful to just use that.

So the cost of going through the usual stages (vertex transformation, rasterization, depth/stencil testing, blending, …) when we just want to perform some kind of filtering/image processing is negligible performance-wise? And what about the explicit compute-shader dispatch mechanism, the possibility to use group shared memory, and thread synchronization? Aren’t those features potentially beneficial for visual effects relative to the screen too?

I fail to see why that should matter.

Compute shaders have two useful qualities you cannot get from the regular rendering pipeline: the ability to specify exact dimensions directly, and workgroup shader cross-talk.
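
As a concrete illustration of that cross-talk (a sketch under assumed names and a fixed radius, not code from any of the apps discussed): a horizontal box blur can stage a row of texels in shared memory so neighbouring invocations reuse each other’s fetches:

```glsl
#version 430
layout(local_size_x = 128) in;
layout(rgba8, binding = 0) readonly  uniform image2D uSrc;
layout(rgba8, binding = 1) writeonly uniform image2D uDst;

const int RADIUS = 4;
shared vec4 sRow[128 + 2 * RADIUS];

void main()
{
    ivec2 size  = imageSize(uSrc);
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
    int   local = int(gl_LocalInvocationID.x);

    // Cooperative load: each invocation fetches one texel, and the first
    // few also fetch the extra "apron" texels the blur needs at the ends.
    sRow[local + RADIUS] = imageLoad(uSrc, clamp(texel, ivec2(0), size - 1));
    if (local < RADIUS) {
        sRow[local] =
            imageLoad(uSrc, clamp(texel - ivec2(RADIUS, 0), ivec2(0), size - 1));
        sRow[local + 128 + RADIUS] =
            imageLoad(uSrc, clamp(texel + ivec2(128, 0), ivec2(0), size - 1));
    }
    barrier(); // wait until the whole workgroup has filled the shared row

    if (any(greaterThanEqual(texel, size)))
        return;

    vec4 sum = vec4(0.0);
    for (int k = -RADIUS; k <= RADIUS; ++k)
        sum += sRow[local + RADIUS + k];
    imageStore(uDst, texel, sum / float(2 * RADIUS + 1));
}
```

The barrier() call is what makes the reuse safe; there is no equivalent synchronization between fragment shader invocations.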

By contrast, the regular rendering pipeline has access to a bunch of facilities that CS’s can’t touch. It’s pipelined, so you can take advantage of interpolation of VS products when accessed by the fragment shader. This makes it easy to have a division of labor. You get access to blending and other post-FS processing steps.
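
For contrast, a sketch of the conventional full-screen pass that route implies (the texture name is an assumption): the VS emits one oversized triangle, the rasterizer hands the FS interpolated UVs, and blending plus the rest of the post-FS state are available as usual:

```glsl
// --- vertex shader: full-screen triangle, no vertex buffer needed ---
#version 330
out vec2 vUV;
void main()
{
    // Three vertices at (0,0), (2,0), (0,2) cover the whole screen.
    vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    vUV = pos;
    gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
}

// --- fragment shader ---
#version 330
in vec2 vUV;
out vec4 fragColor;
uniform sampler2D uScene; // assumed input texture

void main()
{
    // Rasterizer-provided interpolation plus filtered sampling:
    // the facilities a compute shader would have to reproduce by hand.
    fragColor = texture(uScene, vUV);
}
```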

Would these algorithms benefit from the advantages of CS? They don’t seem to involve much cross-talk, and specifying the domain of their processing isn’t a particularly big issue. So why make them compute shaders?

Also, Blender wants to support GL 3.x-class hardware.

That depends: is that image already part of a framebuffer? Because changing framebuffer state is really expensive.

Thank you very much for the clarification.

I think I made the naive assumption “not drawing primitives => compute shader”. So the answer to my question is: given the nature of the algorithm (like outline detection), there’s no real benefit in converting from pixel to compute?

By contrast, the regular rendering pipeline has access to a bunch of facilities that CS’s can’t touch. It’s pipelined, so you can take advantage of interpolation of VS products when accessed by the fragment shader. This makes it easy to have a division of labor. You get access to blending and other post-FS processing steps.

But are those facilities relevant for filtering/image-processing tasks (like outline detection, blur, bloom, …)? And if there are no reasons to use compute for those tasks, are there reasons NOT to use it if you were writing an engine from scratch (if only for the convenience of not having to worry about the state of pipeline stages you’re not interested in when, say, blurring an image)?