glVertexAttribDivisor and its index input

Hi, guys,
I am trying to make use of glVertexAttribDivisor in my OpenGL instanced drawing.

It works on NVIDIA cards, but it doesn't work on ATI cards: nothing is drawn.

GLExtensionViewer shows that both of these cards support glVertexAttribDivisor / instanced streams, and there were no errors at runtime.

I don't know whether this is due to incorrect usage on my part.

I put the instance data in a separate vertex buffer, then map it onto gl_MultiTexCoord0~3. The instance data is the per-instance world matrix.

Here is the code:

    for (int i = 0; i < 3; i++)
    {
        // Feed one row of the per-instance world matrix through each texture coordinate set.
        glClientActiveTexture(GL_TEXTURE0 + i);
        glTexCoordPointer(size, type, stride, (const GLvoid*)(i * 4 * sizeof(float)));

        // Assumes texture coordinate set i aliases generic attribute index 8 + i.
        int instanceVertexAttribIndex = i + 8;
        glVertexAttribDivisorARB(instanceVertexAttribIndex, 1);
    }

The key question is: what "index" should I give to glVertexAttribDivisorARB if I want to put the instance data on gl_MultiTexCoord0?

It works on NVIDIA cards because NVIDIA is not implementing the OpenGL specification properly.

glVertexAttribDivisorARB only works on generic attributes, that is, user-defined shader attributes. It does not apply to any attributes other than those specified with glVertexAttrib(I)Pointer.
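
The only setup the divisor is defined for looks roughly like this (a minimal sketch; "index" stands for whichever generic attribute index you are using):

    // The divisor attaches to a generic attribute index that was set up
    // with glVertexAttribPointer on that same index.
    glEnableVertexAttribArray(index);
    glVertexAttribPointer(index, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glVertexAttribDivisorARB(index, 1);   // fetch one value per instance, not per vertex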

NVIDIA has long implemented attribute aliasing: texture coordinate set zero also has generic attribute index 8, the vertex position also has generic attribute index 0, and so on.

The problem? The OpenGL specification does not allow this; it specifically requires implementations to fail with an error if you try it. When you call glDraw* with these arrays set up, your implementation should give you an error.

Sadly, you got caught in the NVIDIA trap: using non-spec behavior that just so happens to work on their drivers. I imagine you must have gotten the idea from some NVIDIA paper. So now you have to go change all your code to use user-defined attributes instead of the built-in ones.
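
For what it's worth, here is a rough sketch of what the per-instance world matrix setup could look like with generic attributes (program, instanceVBO and the attribute name instanceMatrix are placeholders; in GLSL you would declare something like "attribute mat4 instanceMatrix;", which occupies four consecutive attribute locations):

    // Sketch: feed the per-instance world matrix through four generic vec4 attributes.
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);                // buffer holding one mat4 per instance
    GLint loc = glGetAttribLocation(program, "instanceMatrix");
    for (int i = 0; i < 4; i++)
    {
        glEnableVertexAttribArray(loc + i);
        glVertexAttribPointer(loc + i, 4, GL_FLOAT, GL_FALSE,
                              16 * sizeof(float),                       // stride: one mat4 per instance
                              (const GLvoid*)(i * 4 * sizeof(float)));  // offset of column i
        glVertexAttribDivisorARB(loc + i, 1);                           // advance once per instance
    }

In the vertex shader you then read instanceMatrix directly instead of reassembling it from gl_MultiTexCoord0~3.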