> Will the values “look” different in the GPU memory?
Who cares what it will look like in GPU memory? It’s not like you can peek at GPU memory and know what’s there.
What matters is what you get when you call texture in your shader. Also, what matters is that you must pair GL_R8UI with a usampler2D (or whatever sampler type you're using). That u is important; without it, your shader won't work.
When you fetch from a u-sampler, you will get a uvec4 back: an unsigned integer vec4. When you sample from a floating-point sampler (no prefix), you get a vec4 of floating-point values.
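As a sketch of that pairing in GLSL (uniform names here are illustrative, not from the question):

```glsl
#version 330 core

uniform usampler2D uTex;  // must be backed by a GL_R8UI (or other *UI) texture
uniform sampler2D  fTex;  // must be backed by a GL_R8 (or other float/normalized) texture

out vec4 fragColor;

void main()
{
    // Integer sampler: texelFetch returns a uvec4; .r is on [0, 255] for GL_R8UI.
    uint raw = texelFetch(uTex, ivec2(gl_FragCoord.xy), 0).r;

    // Float sampler: texture returns a vec4; .r is on [0.0, 1.0] for GL_R8.
    float norm = texture(fTex, vec2(0.5)).r;

    fragColor = vec4(float(raw) / 255.0 * norm);
}
```

Swap the sampler types and the shader is broken: the sampler prefix and the texture's format class have to agree.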
> Does it make any difference for precision which of those formats I choose?
When you fetch from a GL_R8 texture, you get a floating-point value on the range [0, 1]. When you fetch from a GL_R8UI, you get an integer value on the range [0, 255].
“Precision” isn’t really something that enters into it. You use the format that most correctly matches what you are doing (unless you need filtering, in which case you use the normalized one).
Both are required formats, so OpenGL isn't allowed to store them with fewer than 8 bits. Precision is irrelevant.
> And aside from the possibility to read uint instead of float texture values in a shader …
Possibility? More like certainty. You're not allowed to mismatch these things. Either you use a u-sampler and have a UI texture format, or you use a float sampler and have a float/normalized integer texture format. You don't get to pick and choose at runtime.
> … is there any benefit of choosing the GL_R8UI format over GL_R8?
This is not a complex decision you need to make here. The set of choices is as follows:
1: Do I need filtering? If yes, then GL_R8.
2: Does the data I’m storing consist of unsigned integers on the range [0, 255]? If yes, then GL_R8UI. If no, then GL_R8.
I don’t understand why you’re having trouble picking one or the other.
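For completeness, the two choices also differ at upload time: integer formats must be specified with the GL_RED_INTEGER pixel transfer format, and they cannot use linear filtering. A sketch, assuming a current GL 3.x context and valid `width`/`height`/`data` (pick one branch, not both):

```c
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* Choice 1: GL_R8 — filterable; shader reads floats on [0, 1] via sampler2D. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* Choice 2: GL_R8UI — shader reads uints on [0, 255] via usampler2D.
   Note GL_RED_INTEGER, and filtering must be GL_NEAREST. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```

Get the transfer format or the filtering wrong on the integer path and you'll get a GL error or an incomplete texture, which is another way the API stops you from mixing the two.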