I fail to see why that should matter.
Compute shaders have two useful qualities you cannot get from the regular rendering pipeline: the ability to specify the exact dispatch dimensions directly, and cross-talk between invocations within a workgroup (via shared memory and barriers).
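For concreteness, here's a minimal sketch of a GLSL compute shader that uses both of those qualities at once; the buffer layout and names are hypothetical, but the mechanisms (`local_size_*`, `shared`, `barrier()`) are the standard ones:

```glsl
#version 430
// Exact dimensions: the workgroup size is declared right here,
// and the total grid is chosen at dispatch time (glDispatchCompute).
layout(local_size_x = 64) in;

layout(std430, binding = 0) buffer Data { float values[]; }; // hypothetical buffer

// Workgroup cross-talk: a scratch array visible to all 64 invocations.
shared float cache[64];

void main() {
    uint lid = gl_LocalInvocationIndex;
    cache[lid] = values[gl_GlobalInvocationID.x];
    barrier(); // make every invocation's write visible to the others

    // Tree reduction across the workgroup -- this sharing of intermediate
    // results is exactly what a fragment shader cannot do.
    for (uint stride = 32u; stride > 0u; stride >>= 1u) {
        if (lid < stride) cache[lid] += cache[lid + stride];
        barrier();
    }
    if (lid == 0u) values[gl_WorkGroupID.x] = cache[0];
}
```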
By contrast, the regular rendering pipeline has access to a bunch of facilities that compute shaders can't touch. It's pipelined, so you can take advantage of the interpolation of vertex shader outputs when they're accessed by the fragment shader. This makes it easy to have a division of labor: per-vertex work in the VS, per-pixel work in the FS. You also get access to blending and other post-FS processing steps.
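To illustrate the interpolation point, a sketch of a minimal VS/FS pair (the attribute and varying names are made up for the example):

```glsl
// --- vertex shader ---
#version 330
in vec3 a_position;
in vec3 a_normal;
out vec3 v_normal;   // written once per vertex...

void main() {
    v_normal = a_normal;
    gl_Position = vec4(a_position, 1.0);
}

// --- fragment shader ---
#version 330
in vec3 v_normal;    // ...arrives per fragment, interpolated by fixed-function hardware
out vec4 frag_color;

void main() {
    // The interpolation above (and any blending against the framebuffer
    // afterwards) costs the shader nothing. A compute shader would have
    // to reimplement both by hand with manual loads and stores.
    frag_color = vec4(normalize(v_normal) * 0.5 + 0.5, 1.0);
}
```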
Would these algorithms benefit from the advantages of CS? They don’t seem to involve much cross-talk, and specifying the domain of their processing isn’t a particularly big issue. So why make them compute shaders?
Also, Blender wants to support GL 3.x-class hardware, and compute shaders only became core in GL 4.3.
That depends: is that image already part of a framebuffer? Because changing framebuffer state is really expensive.