GLSL shader imageStore only works with rgba32f image format

I wanted to implement a Vulkan compute shader where some data is written to a VkImage via ‘imageStore’ and afterwards copied to the swapchain (and displayed).

Initially it would just show a black screen, but after trying out different things I managed to make it work by setting the image format to ‘rgba32f’ in the shader.

However, if I understand it correctly, the image format specified in the shader should be converted to the image format specified by the binding function, at least according to the Khronos wiki (Image Load Store#Format conversion)?

But even if that wasn’t the case, I don’t know why it only seems to work with ‘rgba32f’ since I’m using the ‘VK_FORMAT_B8G8R8A8_UNORM’ format when creating the VkImage, so I would only expect a format like ‘rgba8ui’ to work in the shader.

I can paste the whole shader code here if required, but in regards to the Vulkan code I’m not sure what parts would be useful (and the whole code is too much to paste here).

Sorry for late reply.

Firstly, UNORM is a float format; it is effectively a float encoded as an integer. UINT and SINT are the integer formats. The appropriate format qualifier for VK_FORMAT_B8G8R8A8_UNORM should be rgba8 (in OpenGL, formats without a suffix are UNORM by default).
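
E.g. the storage image declaration could look something like this (untested sketch; the set/binding, local size and the outImage name are just placeholders):

#version 450
// rgba8 = 8-bit UNORM per channel; imageStore then takes float values in [0, 1].
layout(set = 0, binding = 0, rgba8) uniform writeonly image2D outImage;
layout(local_size_x = 8, local_size_y = 8) in;

void main()
{
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
    imageStore(outImage, texel, vec4(1.0, 0.0, 0.0, 1.0)); // opaque red
}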

If shaderStorageImageReadWithoutFormat and shaderStorageImageWriteWithoutFormat are not enabled in the device features, you must always specify a format in the shader.
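
For completeness, a write-only image can also be declared without a format qualifier (untested sketch, same placeholder binding as above; this only works if shaderStorageImageWriteWithoutFormat is enabled):

// No format qualifier: legal in GLSL only for writeonly access, and requires
// shaderStorageImageWriteWithoutFormat at runtime.
layout(set = 0, binding = 0) uniform writeonly image2D outImage;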

PS: The format compatibility is in this table: Vulkan® 1.1.243 - A Specification. (Those are SPIR-V names, so they might differ slightly from GLSL, but I think only in that the first letter is capitalized.) This must match. Where conversion potentially happens is between the image view format and the g in gimage2D.

Ah yes you’re right, for some reason I thought UNORM formats consist of unsigned integer components, I guess I forgot that formats like VK_FORMAT_B8G8R8A8_UINT exist for those.

rgba8 indeed works, though to be honest I was too lazy to test with other formats than rgba32f and rgba8ui (the results of that testing would have probably led me to a similar conclusion).

Anyway, I also have a bonus question (not sure where to put it outside this thread, so here it goes):
In my program I’m doing a 5,5,5,1-bit to 8,8,8,8-bit color conversion. Originally I did this on the CPU (and wrote it directly to the VkImage via vkMapMemory), like this:

// Expand each 5-bit channel to 8 bits; the float -> uint8_t conversion truncates.
uint8_t r32 = ((float)r16 / 31.0f) * 255.0f;
uint8_t g32 = ((float)g16 / 31.0f) * 255.0f;
uint8_t b32 = ((float)b16 / 31.0f) * 255.0f;
uint8_t a32 = a16 * 255; // 1-bit alpha -> 0 or 255
uint32_t color = (a32 << 24) | (r32 << 16) | (g32 << 8) | b32; // pack into one 32-bit pixel

However the compute shader produces slightly different color values in the final image if I do it like this:

// Normalized path: all values stay in [0, 1]; imageStore does the UNORM conversion.
vec3 rgb = vec3(r16, g16, b16) / 31.0;
float alpha = float(color16 & 0x1);
vec4 color = vec4(rgb, alpha);

In order to get the same values from the original method I have to do this instead:

// Mimics the CPU path: scale to [0, 255], truncate via the uvec3 cast, then renormalize.
vec3 rgb = vec3(r16, g16, b16) * (255.0/31.0);
uint alpha = (color16 & 0x1) * 255;
vec4 color = vec4(uvec3(rgb), alpha) / 255.0;

As you can see, this is most likely due to the float value being truncated when it is cast to an integer.
However, even with mathematical rounding on the CPU, the resulting color values don’t exactly match those of the pure-float compute shader (I guess due to different rounding methods/precision?).
So now my question is: what’s the more accurate method of color conversion in this case?

You should look into these two posts for details, but the short version is that floating-point normalization uses implementation-defined rounding. So unless you do normalization manually, you cannot guarantee that a floating-point value of 0.5 will produce a normalized integer value of 1.

I mean, this is missing the 255 entirely, so I am not sure how it is supposed to work.

The validation layers should give an error, but I guess that check is not implemented if you got nothing. Using an incompatible format qualifier is not valid usage, and therefore leads to undefined behavior.

Float-to-int conversion in C++ truncates, per the spec.

If you value accuracy, you could do integer math. Though actual “colors” are not that important; humans can’t tell slight differences anyway.
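
E.g. something along these lines (untested sketch, using the same r16/g16/b16/color16 names as your snippets):

// Exact integer 5-bit -> 8-bit expansion with round-to-nearest;
// drop the "+ 15u" to reproduce the truncating CPU version instead.
uvec3 rgb8 = (uvec3(r16, g16, b16) * 255u + 15u) / 31u;
uint a8 = (color16 & 0x1u) * 255u;
vec4 color = vec4(rgb8, a8) / 255.0;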

As far as I understand it, the shader operates on the image via the normalized format, hence the color value range there needs to be between 0 and 1 (and it certainly produces correct enough results).

Though with that in mind I’m not sure why the CPU mapping works with the 0 to 255 range; I guess the data representation is the same on a bit level? Probably also a reason why I initially thought that UNORM would map to rgba8ui.

I know, I was just referring to a test with rounding applied via roundf() (didn’t want to post redundant code).

Anyway, thank you very much @Alfonse_Reinheart and @krOoze for clearing things up for me.

That is possible. GLSL should be able to operate on both normalized and unnormalized values; it depends on how you declared the input…

What I mean is that the normalized value is the same-ish (±accuracy) for 8888 and 5551. For both it would be in the range [0, 1], right?

So if you divide the range [0, 1] by 31, you get a [0, 1/31] range. That is the part that seems nonsensical to me. At a minimum I am not sure what you are trying to achieve there, and I see it obviously differs from the CPU code.

Ah, I probably wasn’t clear enough in that regard. The range of the input values is the same in both CPU and shader code ([0, 31]).

I just omitted the extraction code for the 5551 color components in both cases and simply labeled them r16, g16, b16 (since they were extracted from a 16-bit value).