Difference between a cast vec and an ivec

I was trying to apply skinning to a mesh, and I used a code sample from the GLTF file format overview (cannot insert link) as a reference for computing the skinning matrix.
There is something I don't understand: in the following code, why is the float-to-int cast necessary?
Here are two versions, with and without the cast.
The first version, with the float-to-int casts, works. But with the second version, which uses the ivec type for the “joint” attribute, the behaviour is different.

#version 440 core

layout(location = 0) in vec3 pos;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec4 joint;
layout(location = 3) in vec4 jointWeight;

(...)

uniform mat4 jointMatrices[64];

void main()
{
    mat4 combinedJointMatrix =
        jointWeight[0] * jointMatrices[int(joint[0])] +
        jointWeight[1] * jointMatrices[int(joint[1])] +
        jointWeight[2] * jointMatrices[int(joint[2])] +
        jointWeight[3] * jointMatrices[int(joint[3])];

Why can’t we use ivec instead, like this?

#version 440 core

layout(location = 0) in vec3 pos;
layout(location = 1) in vec3 normal;
layout(location = 2) in ivec4 joint;
layout(location = 3) in vec4 jointWeight;

(...)

uniform mat4 jointMatrices[64];

void main()
{
    mat4 combinedJointMatrix =
        jointWeight[0] * jointMatrices[joint[0]] +
        jointWeight[1] * jointMatrices[joint[1]] +
        jointWeight[2] * jointMatrices[joint[2]] +
        jointWeight[3] * jointMatrices[joint[3]];

Who says that it is necessary? You can index arrays with floats; the system will convert them to integers.

Well, yeah. Your VS attribute is no longer a float, it’s an integer. Any apparent difference is probably due to how you set up your vertex attributes in OpenGL. Which you didn’t show.

Sorry for not showing all the code; I thought it might not be relevant.
I set up the attributes using the information from a GLTF file (and a library called tinygltf).

static void setVertexAttribPointerFromGltf(int attributeIndex, const tinygltf::Accessor& accessor, const tinygltf::BufferView& bufferView)
{
    glEnableVertexAttribArray(attributeIndex);
    glVertexAttribPointer(attributeIndex,
        getAttributeNumComponents(accessor.type), accessor.componentType,
        accessor.normalized, bufferView.byteStride, (void*)bufferView.byteOffset);
}

“accessor.componentType” says it is unsigned short.

For attribIndex == 2?

What are:

  1. getAttributeNumComponents(accessor.type),
  2. accessor.componentType, and
  3. accessor.normalized

for attribIndex = 2, before and after your shader change above?

You need to use glVertexAttribIPointer (note the extra “I”) for integer attributes. If you use glVertexAttribPointer, the GPU will be instructed to perform integer-to-float conversion on the data, but the shader will be interpreting the values as integers.
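
For the joints attribute, the difference looks roughly like this (a minimal sketch; attribute location 2, four components, and the stride/offset values are placeholders, not values taken from your loader):

// Float path: GL_UNSIGNED_SHORT data is converted to float on fetch,
// which matches `layout(location = 2) in vec4 joint;` in the shader.
glVertexAttribPointer(2, 4, GL_UNSIGNED_SHORT, GL_FALSE, stride, (void*)offset);

// Integer path: the values are passed through untouched,
// which matches `layout(location = 2) in ivec4 joint;` (or uvec4).
// Note there is no "normalized" parameter here.
glVertexAttribIPointer(2, 4, GL_UNSIGNED_SHORT, stride, (void*)offset);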

Oh, I see. I thought whether glVertexAttribPointer performed a conversion to float or not depended on the type parameter. I will try using glVertexAttribIPointer and report back on the result.

Well, it won’t perform a conversion if the type parameter is GL_FLOAT. The result is always a float (more precisely, a vec4); the type parameter tells it what it’s converting from.

I have tried setting the attribute with glVertexAttribIPointer and using it as an ivec in the shader, and it works as intended. Thanks for the help.
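
For reference, the setup helper ends up looking roughly like this (just a sketch; deciding on the integer path from accessor.componentType and accessor.normalized is how I chose to do it, not necessarily the only way):

static void setVertexAttribPointerFromGltf(int attributeIndex, const tinygltf::Accessor& accessor, const tinygltf::BufferView& bufferView)
{
    glEnableVertexAttribArray(attributeIndex);

    // GLTF stores JOINTS_0 as non-normalized unsigned byte/short, so treat
    // non-normalized unsigned integer data as an integer attribute.
    const bool integerAttribute = !accessor.normalized &&
        (accessor.componentType == TINYGLTF_COMPONENT_TYPE_UNSIGNED_BYTE ||
         accessor.componentType == TINYGLTF_COMPONENT_TYPE_UNSIGNED_SHORT ||
         accessor.componentType == TINYGLTF_COMPONENT_TYPE_UNSIGNED_INT);

    if (integerAttribute)
    {
        // Values reach the shader as integers (ivec4/uvec4); no "normalized" parameter.
        glVertexAttribIPointer(attributeIndex,
            getAttributeNumComponents(accessor.type), accessor.componentType,
            bufferView.byteStride, (void*)bufferView.byteOffset);
    }
    else
    {
        // Values are converted to float (vec4), as before.
        glVertexAttribPointer(attributeIndex,
            getAttributeNumComponents(accessor.type), accessor.componentType,
            accessor.normalized, bufferView.byteStride, (void*)bufferView.byteOffset);
    }
}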
