8-bit integer color read and write

First of all, I just want to say that I am very new to OpenGL programming…

I want to be able to write and read 8-bit integer colors, but I can’t get it to work.
I have a very simple scene with no lights, textures, etc… just objects in color… if that makes sense… but so far I can only get float color working with this:

# Position
glEnableVertexAttribArray(0)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 24, ctypes.c_void_p(0))
# Color
glEnableVertexAttribArray(1)
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 24, ctypes.c_void_p(12))

and this:

layout (location=0) in vec3 vertexPos;
layout (location=1) in vec3 vertexColor;

uniform mat4 model;
uniform mat4 projection;

out vec3 fragmentColor;

void main()
{
    gl_Position = projection * model * vec4(vertexPos, 1.0);
    fragmentColor = vertexColor;
}

and then in the fragment shader I just append 1.0 to make a vec4 out of fragmentColor.

When I want to use 8-bit color like this:

glVertexAttribPointer(1, 1,  GL_BYTE, GL_FALSE, 13, ctypes.c_void_p(12))
glVertexAttribIPointer(1, 1,  GL_BYTE, 13, ctypes.c_void_p(12))

it doesn’t work, and I don’t understand why.
I assume the stride should be just 13, because GL_BYTE is just one byte more… is that correct?
If I use glVertexAttribIPointer() instead of glVertexAttribPointer(), can I still use vec3 in the shader, assuming I then use 8 bits for each component, or do I have to use int or uint in the shader?
Would I then have to do this:

glVertexAttribIPointer(1, 3,  GL_BYTE, 15, ctypes.c_void_p(12))

If I could write 8-bit color, how can I read back the same 8-bit color with glReadPixels()? Is that even possible? Or, no matter what you use for color, will it always be converted to float, so you won’t be able to read it back as it was?

I want to use this for picking/selecting objects. Is there maybe another way to store some data and read it back with glReadPixels()? It should be at least 8 bits of data.

Look at the function call; it says that you’re providing 1 component for the attribute. Does the shader consume a single component for attribute 1 or does it consume 3 components?

If I use glVertexAttribIPointer() and not glVertexAttribPointer(), can I still use vec3 in the shader

No. Integers aren’t floats. If you use IPointer, that means you want to pass an integer. That means the shader needs to use ivec3 or uvec3.

If you want to pass a normalized value (which is consumed in the shader as a floating-point value), you use Pointer.
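As a rough, context-free illustration of that difference, here is what the shader ends up seeing for one unsigned byte, depending on which pointer call you use (plain Python, just to show the conversion rule; the function names are made up):

```python
def as_normalized_float(v):
    # glVertexAttribPointer(..., GL_UNSIGNED_BYTE, GL_TRUE, ...):
    # the byte is rescaled so 0 -> 0.0 and 255 -> 1.0,
    # and the shader input is float/vec*.
    return v / 255.0

def as_integer(v):
    # glVertexAttribIPointer(..., GL_UNSIGNED_BYTE, ...):
    # the byte is passed through unchanged,
    # and the shader input must be uint/uvec*.
    return v

print(as_normalized_float(255))  # 1.0
print(as_integer(255))           # 255
```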

Then you’re no longer talking about a “color”; you’re talking about arbitrary data. You explicitly want to store an integer (and I would suggest a 16 or 32-bit integer, since 256 is a little small for picking). In that case, you should be rendering to an image with an integer image format, the fragment shader output should have an integer type, and every step in the shader needs to deal with integers, including the VS input.

Thank you very much for your quick and informative response.

Yes, I want to use a single 8-bit integer. So then, should I use 13 for the stride? Because it is just one byte, right?

Let’s say I want to use an unsigned int…:

glEnableVertexAttribArray(0)    # position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 16, ctypes.c_void_p(0))

glEnableVertexAttribArray(1)  # color
glVertexAttribIPointer(1, 1, GL_UNSIGNED_INT, 16, ctypes.c_void_p(12))

I have three floats, each 4 bytes, which makes 12 bytes… and one unsigned int value, which should be 4 bytes. All together a 16-byte stride… is that correct?
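To double-check that arithmetic, I let numpy compute the layout from a structured dtype (the field names are just for illustration):

```python
import numpy as np

# Interleaved layout: 3 x float32 position + 1 x uint32 ID per vertex.
vertex_dtype = np.dtype([
    ("position", np.float32, 3),   # 12 bytes at offset 0
    ("obj_id",   np.uint32),       # 4 bytes at offset 12
])

print(vertex_dtype.itemsize)             # 16 -> the stride
print(vertex_dtype.fields["obj_id"][1])  # 12 -> the offset for attribute 1
```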

in vertex shader I have:

layout (location=0) in vec3 vertexPos;
layout (location=1) in uint vertexColor;

uniform mat4 model;
uniform mat4 projection;

out uvec3 fragmentColor;

void main()
{
    gl_Position = projection * model * vec4(vertexPos, 1.0);
    fragmentColor = uvec3(0, 0, vertexColor);
}

and in the fragment shader I have this:

in uvec3 fragmentColor;

out uvec4 color;

void main()
{
    color = uvec4(fragmentColor, 1);
}

and this doesn’t work.
I thought it would be easier to just sample the “color” or whatever… it has now been 5 days that I’ve been trying to make this work.

I’m sorry for my ignorance, but what is “VS input”? And do I have to make the image accept the “integer format” somehow?

Vertex Shader input. The in variables in a vertex shader.

You have to create the image with an integer internal format.
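In PyOpenGL, that might look something like this (an untested sketch — it needs a live GL context, and the helper name and texture parameters are my own choices, not from the thread):

```python
from OpenGL.GL import *

def create_picking_target(width, height):
    # A single-channel 32-bit unsigned-integer texture: the fragment
    # shader can then write a `uint` ID into it with no float conversion.
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_INT, None)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)

    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0)
    assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
    return fbo, tex
```

Reading back would then use glReadPixels with GL_RED_INTEGER / GL_UNSIGNED_INT, so no normalization is involved at any point.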

Okay, I finally decided yesterday that I’m not going to use a per-vertex ID… It does seem ridiculous to pass the same integer for each vertex of an object… I thought that would be a simpler approach, but I now see that no approach is actually simple at all.

Actually, what I don’t want to do is pass the ID through attributes, if that makes sense.

Now I am trying this: a separate shader program for on-screen and off-screen rendering, with separate but identical shader variables for ‘projection’ and ‘model’… and additionally a selectionID uniform for the new program. Like this:

glUniformMatrix4fv(
        glGetUniformLocation(self.shader, "projection"),
        1, GL_FALSE, projection_transform)
self.modelMatrixLocation = glGetUniformLocation(self.shader, "model")

glUniformMatrix4fv(
        glGetUniformLocation(self.selectionProgramID, "projection"),
        1, GL_FALSE, projection_transform)
self.modelMatrixLocation1 = glGetUniformLocation(self.selectionProgramID, "model")
glBindFragDataLocation(self.selectionProgramID, 0, "outputF")

then, I would render off-screen image using that selectionID …

So, just before all transformations I would use this:


and just before rendering each object I would use this:

glProgramUniform1i(self.selectionProgramID, self.codeVarLocation, cube.cube_id)

The vertex shader is the same as the normal vertex shader, just without lighting, textures, etc.
The fragment shader would look like this:

uniform int code;

out vec4 outputF;

void main()
{
    outputF = vec4(code/255.0, 0, 0, 0);
}

Does that make sense?
It isn’t working yet… I just get 0 when reading with glReadPixels()
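The code/255.0 encoding itself should round-trip exactly for IDs 0..255, which can be checked without a GL context (a sketch, assuming an 8-bit normalized red channel; the helper names are made up):

```python
def encode_id(code):
    # What the fragment shader computes: outputF.r = code / 255.0
    return code / 255.0

def float_to_byte(f):
    # How an 8-bit normalized framebuffer stores the shader output.
    return round(f * 255.0)

def decode_id(red_byte):
    # What glReadPixels(..., GL_RED, GL_UNSIGNED_BYTE) hands back.
    return red_byte

for code in range(256):
    assert decode_id(float_to_byte(encode_id(code))) == code
print("IDs 0..255 round-trip exactly")
```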

I didn’t know I had to use glUseProgram() before setting the variables for each program… I just thought that it would know which program is being used since all of the functions have a ‘program’ parameter.

I am now getting the right ID when I read using glReadPixels(), but if I use GL_UNSIGNED_INT as the ‘type’ I don’t get 1 for id=1 or 2 for id=2; instead I get 16843009 for id=1, which in binary is 0b00000001 repeated four times: 16843009 = 0b00000001_00000001_00000001_00000001. And if I use GL_UNSIGNED_BYTE, I actually get a bytes object, like b'\x01\x00\x00\xff', which is 0b01 for id=1.
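So, if I understand the conversion correctly, re-normalizing an 8-bit value to 32 bits replicates the byte into every byte position (v * 0x01010101), and the GL_UNSIGNED_BYTE result is just a 4-byte RGBA buffer. A quick check of that assumption:

```python
import struct

def ubyte_to_uint_normalized(v):
    # GL_UNSIGNED_INT readback: the 8-bit normalized value v/255,
    # re-normalized to 32 bits, becomes v * 0x01010101
    # (0x01 -> 0x01010101, 0xFF -> 0xFFFFFFFF).
    return v * 0x01010101

print(ubyte_to_uint_normalized(1))   # 16843009

# GL_UNSIGNED_BYTE readback: a bytes object, one byte per RGBA channel.
pixel = b"\x01\x00\x00\xff"
red, green, blue, alpha = struct.unpack("4B", pixel)
print(red)  # 1 -> the picked ID
```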

You don’t need to bind the program before calling glProgramUniform*, but you do need to bind it before calling glUniform*. The above code has some calls to glUniformMatrix4fv, which require the correct program to be active; if you’re going to be using glProgramUniform* for the other cases, you may as well change those calls to glProgramUniformMatrix4fv.

Okay, thank you… I have something mixed up there… I’ll look into it. Thanks!