The software I’m working on does a lot of image compositing, some of it using Photoshop-like blend modes that both read and write the destination (read-modify-write). In the current OpenGL renderer, this is done by copying the destination texture, attaching the original to the framebuffer, and sampling the copy in the blend shader. This obviously isn’t great, but short of using extensions that allow reading the previous value of the destination pixel (and those extensions seem tied to tiled renderers on mobile platforms), it still seems like the only way to achieve this.
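For context, the current workaround looks roughly like the following sketch (function name and FBO handles are illustrative, not our actual code):

```c
#include <GL/gl.h>

/* Sketch of the copy-then-sample workaround: duplicate the destination
 * into a scratch texture, then render back into the destination while
 * the blend shader samples the scratch copy for the "old" value. */
void composite_with_copy(GLuint dst_fbo, GLuint scratch_fbo,
                         int width, int height)
{
    /* 1. Copy the destination into the scratch attachment. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, dst_fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, scratch_fbo);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    /* 2. Render into the destination again, with the scratch texture
     * bound as a regular sampler so the blend shader can read the
     * previous destination value without a feedback loop. */
    glBindFramebuffer(GL_FRAMEBUFFER, dst_fbo);
    /* ... bind scratch texture to a texture unit, draw blended geometry ... */
}
```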
We’ve been planning to migrate to a Vulkan architecture, which I thought would give better control over this, but I may have been operating under false premises. Given the level of control over subresources that Vulkan allows, I thought it would be possible to write to one part of an image while a different part is being read (i.e. copied from). But no commands appear to allow copying an image region while inside a render pass, and using a shader to do the copy would require beginning a new render pass anyway to define load/store operations and barriers. The documentation for VkFramebufferCreateInfo, as well as the discussion on Vulkan-Docs issue #299, seems to confirm this is impossible.
But because this is strictly reading and writing the same fragment, it seems like it might be doable with input attachments: a subpass that binds the same image as both input and color attachment, so each fragment can read its previous value while it is being used as the output? If anything, Alfonse_Reinheart’s reply to question #7035 on this forum seems to imply that it might even be doable within the same draw call?
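To make the question concrete, here is a sketch (untested, an assumption on my part) of what I imagine the render pass setup would look like: attachment 0 referenced as both input and color attachment in a single subpass, plus a by-region self-dependency. Whether reads and writes are well-defined within a single draw, rather than only between draws separated by a matching vkCmdPipelineBarrier, is exactly the open question:

```c
#include <vulkan/vulkan.h>

/* Attachment 0 used as both input and color attachment; GENERAL layout
 * is required for an image used simultaneously for both purposes. */
VkAttachmentReference color_ref = {
    .attachment = 0,
    .layout = VK_IMAGE_LAYOUT_GENERAL,
};
VkAttachmentReference input_ref = {
    .attachment = 0,
    .layout = VK_IMAGE_LAYOUT_GENERAL,
};

VkSubpassDescription subpass = {
    .pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS,
    .inputAttachmentCount = 1,
    .pInputAttachments    = &input_ref,
    .colorAttachmentCount = 1,
    .pColorAttachments    = &color_ref,
};

/* Self-dependency (subpass 0 -> subpass 0): color writes must become
 * visible to subsequent input-attachment reads, per-region. */
VkSubpassDependency self_dep = {
    .srcSubpass      = 0,
    .dstSubpass      = 0,
    .srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
    .srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
    .dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT,
    .dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT,
};
```

The fragment shader would then read the previous destination value with `subpassLoad()` on a `subpassInput` uniform before writing the blended result.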
I don’t have code examples yet, let alone an actual Vulkan backend, but clearing this up would strongly inform the architecture work I’m putting together right now.
(Apologies for just referencing instead of linking; as a newly registered user, it appears I’m not able to post links.)