Convert _UNORM to _UINT on the GPU?

I’m trying to add a simple screenshot feature to my Vulkan app. I was hoping to just take the latest image from the swapchain, copy it into a CPU-accessible buffer, and then read it into an image processing library. And it kind of works, except that my swapchain format is something like B8G8R8A8_UNORM, while the image processing library expects something like B8G8R8A8_UINT.

Is there a way to do the conversion on the GPU?

  • vkCmdCopyImage does allow copying between these two formats, but as far as I can tell from the spec, it just does a raw bit copy
  • vkCmdBlitImage can do conversions in general, but not between _UINT and anything else.

Am I missing something else? Is there no easy way to do this?

(note: performance is not a concern – this screenshot feature is used for testing/debugging, so it doesn’t matter if it’s not the most efficient thing in the world, but I would like it to grab the actual swapchain image, as that’s the Real Thing, what the user would actually see)

What do you perceive the difference between the _UNORM and _UINT formats to be, such that you feel you need to convert rather than just type-cast?

I… huh. Good question! I guess I’m pretty confused.

So UNORM is supposed to be 0.0 to 1.0, though I can’t find the part of the spec that says how it’s represented in memory. Basically I assumed that UINT would be represented as a normal 8-bit unsigned int (which is what I need), but now that I look at the data, it seems that UNORM is represented exactly the same way. Is that guaranteed?

The Compatible Formats page does list them as compatible, but then again it also lists things like VK_FORMAT_A2B10G10R10_UNORM_PACK32 in the same list, so “compatible” doesn’t seem to mean “will represent the same image after copy”. So I guess I just wanted some assurance that the _UNORM to _UINT copy would represent the same image 🙂

Which makes me wonder: if _UNORM/_SNORM/_UINT are identical in memory, what’s the point of them? Is it just to change what kind of values you get in the shader?


PS: The point of static types is not identity in memory (representation), but identity in interpretation. bool and int can also be identical in memory. _UNORM is interpreted as a normalized fixed-point value, while _UINT is interpreted as an unsigned integer.

You say that as though that were unimportant.

The normalized formats are functionally floating-point formats. That is, they resolve in shaders to floating-point values. They can also undergo filtered sampling operations as well as framebuffer blending, neither of which is supported for integer formats.

This also affects how writes to them work. An image store or framebuffer write operation to a normalized image takes floating-point values, clamps them to the normalized range, and then converts them to the appropriate normalized values. None of this happens for writes to integer formats.

I don’t see a big problem; in fact, I did something pretty similar. I still use B8G8R8A8_UNORM as the format for everything. When writing, I take the data pointer of the blitted/copied image and cast it to an unsigned char / uint8_t pointer (or copy it into a std::vector), break that down into rows and pixels of RGB/RGBA (depending on the format you want to write to disk and the library you use to write that image format), and write them out.

Thanks, everyone, this really helps!