Strange mapping of float shader color values to uchar values in a read buffer


I know that a float color value in a shader, in the range [0…1], is mapped to the range [0…255] in a UCHAR buffer.

Based on this, I was expecting steps of 1/255 in the shader color value for each change in the UCHAR buffer value.

But the results were surprisingly different. Here are the first two steps:

Red float value in shader -> UCHAR value in the read buffer

0.000000 -> 0
0.002197 -> 0
0.002198 -> 1
0.006102 -> 1
0.006105 -> 2

The first two steps occur around 0.002197 and 0.006102, which differ from the expected thresholds of 1/255 ≈ 0.00392 and 2/255 ≈ 0.00784.

So what is the mapping formula?

The mapping formula is the same as it was when you asked this question on Stack Overflow. I know you don’t like that answer, but it is the right answer, and that fact won’t change just because you asked elsewhere.

The answer in question being that it is implementation-dependent. Implementations must return one of the two integers nearest to the scaled value, but beyond that (and the requirement that 0.0 and 1.0 convert exactly to 0 and the maximum value, respectively), the Vulkan specification allows implementations to do as they see fit. The specification encourages proper round-to-nearest behavior, but does not require it.

The fact is, if you’re converting floats to bytes, your error tolerance for the values you get is 1/255. For any given float, the byte representation will be accurate to within ±1/255. That’s all the specification guarantees (again, outside of the exact values for 0.0 and 1.0). And because of this implementation freedom, you cannot write portable code that expects anything else. If your algorithm cannot tolerate an error bar of 1/255, then you should not convert your values to bytes.

If you need specific details for a specific implementation, you can ask for that (and you’ll have to provide details on which one you’re interested in), but be advised that the answer will be correct only for that specific implementation, and is subject to change at any time. Yes, a mere driver update could potentially change how rounding works.

If you have an explicit need for well-defined rounding, you can always do the normalization yourself. In fact, GLSL has specific functions to help you. The following assumes that you are trying to write to a texture with the Vulkan format VK_FORMAT_R8G8B8A8_UNORM, and that you’re writing to a storage image, not via outputs from the fragment shader (you can do that too, but you lose blending).

So, step 1 is to change your image layout format qualifier to r32ui. That is, you are now writing a single unsigned 32-bit value rather than four unsigned 8-bit normalized values. That’s perfectly valid.

Step 2 is to employ the packUnorm4x8 function. This function does the float-to-integer normalization, and the GLSL specification explicitly defines it to round correctly. Use the return value of that function in your imageStore call, and you’re fine.

If you want to write via a fragment shader output, that’s a bit more complex. There, you will need to use a different image view, one that uses the VK_FORMAT_R32_UINT format. So you’re creating a 32-bit unsigned integer view of a 4x8-bit normalized texture. That view has to become a render target, so you’re going to have to do some subpass surgery. From there, just write the result of packUnorm4x8.

Of course, you immediately lose blending and similar operations, since you’re writing integer values. And since you had to do that subpass surgery, it’s likely that any shader writing to this image will need to do the same.

Also, note that in both cases, you will likely need to adjust the order of the components of the value you write. packUnorm4x8 is explicitly defined to put the first component in the least significant bits, whereas (I believe?) R8G8B8A8 is specified to store its components in that order, most significant to least. So you’ll probably need to do the equivalent of an endian swap with packUnorm4x8(value.abgr).

Thanks @Alfonse_Reinheart.
And yes, I had a lot of doubt about this issue, so I wanted to hear more.