glDrawArrays shader input / layout question


Hey all!

I’m having trouble understanding how to get my vertex buffer data into my shader correctly. The data comes as a T2F-N3F-V3F array, and I’ve tested various declarations in the vertex shader, like this one:

layout(location = 0) in    vec2 TexCoord;
layout(location = 1) in    vec3 Normal;
layout(location = 2) in    vec3 Vertex;

In this case, normal and vertex data are transferred correctly if I fill the array in T2F-V3F-N3F order, but I don’t receive any texture data. Reordering everything doesn’t seem to help either. Whatever I put in the array as texture coordinates gets lost somehow.

Any idea what I’m doing wrong?

Thank you,


Not without seeing the client code which sets up the attribute arrays.


Thanks for your reply.

Maybe the point is that I’m still mixing up VAOs and VBOs. I’m still working with a VBO(?) without client-specified attribute sets; I use the standard GL_T2F_N3F_V3F format.
I have a class (mostly for convenience) that manages the array. In this case only the constructor and “void clsVAO::DrawInterleaved(GLint format, GLint DrawType)” are relevant; the latter is called with GL_T2F_N3F_V3F and GL_QUADS as parameters. The other functions are only needed for the CSV file output (the currently commented-out “//SaveAsCSV(“TEST_BUFFER.txt”);” call), which I used to check the array data - it seems okay so far: T2F, N3F, V3F with correct T2F values.

class clsVAO {
        GLint       NumV, Format, attrItems, byteStride, bufBytes;
        GLuint      VBO;
        GLfloat     *Data;

        void        CalcSize    (int numV, int format);
public:
        clsVAO      (GLint numV, GLint format = GL_T2F_N3F_V3F, GLfloat* data = NULL); 
        virtual    ~clsVAO      ();

        number      *Open       (int numV = -1, int format = -1, GLfloat* data = NULL);
        void        Close       ();

        void        DrawInterleaved (GLint format = GL_T2F_N3F_V3F, GLint DrawType = GL_QUADS);       

        void        SaveAsCSV   (wxString fNam);
};

void clsVAO::CalcSize (int numV, int format) {
    NumV = numV;  Format = format; 
    attrItems   = Format <  1000                ? Format
                : Format == GL_V2F              ? 2     
                : Format == GL_V3F              ? 3     
//              : Format == GL_C4UB_V2F         ? 4 * sizeof(GLubyte) + 2 * sizeof(GLfloat)
//              : Format == GL_C4UB_V3F         ? 4 * sizeof(GLubyte) + 3 * sizeof(GLfloat)
                : Format == GL_C3F_V3F          ? 3 + 3
                : Format == GL_N3F_V3F          ? 3 + 3
                : Format == GL_C4F_N3F_V3F      ? 4 + 3 + 3
                : Format == GL_T2F_V3F          ? 2 + 3
                : Format == GL_T4F_V4F          ? 4 + 4
//              : Format == GL_T2F_C4UB_V3F     ? 2 + 4 * sizeof(GLubyte) + 3
                : Format == GL_T2F_C3F_V3F      ? 2 + 3 + 3
                : Format == GL_T2F_N3F_V3F      ? 2 + 3 + 3
                : Format == GL_T2F_C4F_N3F_V3F  ? 2 + 4 + 3 + 3
                : Format == GL_T4F_C4F_N3F_V4F  ? 4 + 4 + 3 + 4
                :                                 1;

    byteStride  = attrItems * sizeof(GLfloat);

    bufBytes    = NumV * byteStride;      
}

clsVAO::~clsVAO         () {    glDeleteBuffers(1, &VBO);     glBindBuffer    (GL_ARRAY_BUFFER, 0);  }

clsVAO:: clsVAO         (int numV, int format, GLfloat* data) {

    CalcSize            (numV, format);    
    glGenBuffers        (1, &VBO);
    glBindBuffer        (GL_ARRAY_BUFFER, VBO);
    glEnableClientState (GL_VERTEX_ARRAY);
    glBufferData        (GL_ARRAY_BUFFER, bufBytes, data, GL_STATIC_DRAW);
}

void clsVAO::DrawInterleaved (GLint format, GLint DrawType) {
    glBindBuffer        (GL_ARRAY_BUFFER, VBO);   
    glInterleavedArrays (format, byteStride, NULL);
    glDrawArrays        (DrawType, 0 /*start*/, NumV);     

    glBindBuffer        (GL_ARRAY_BUFFER, 0); 
}

number* clsVAO::Open    (int numV, int format, GLfloat* data) {      

    glBindBuffer        (GL_ARRAY_BUFFER, VBO);
    glEnableClientState (GL_VERTEX_ARRAY);

    if (format > 0 && numV > 0 && (numV != NumV || format != Format))  {
        CalcSize        (numV, format);    
        glBufferData    (GL_ARRAY_BUFFER, bufBytes, data, GL_STATIC_DRAW);
    }
    return              (GLfloat*)glMapBuffer(GL_ARRAY_BUFFER, GL_READ_WRITE);
}

void clsVAO::Close      () {
    glUnmapBuffer       (GL_ARRAY_BUFFER);        
}

void clsVAO::SaveAsCSV (wxString fNam) {
    wxString ret; ret.reserve(bufBytes * 10);       GLfloat *fp = Open();
    for     (int vi = 0; vi < NumV;      vi++) {
        for (int ai = 0; ai < attrItems; ai++)      ret += tostrp(*fp++) + ";";
        ret += "\n";
    }
    wxFile (fNam, wxFile::write).Write(ret);        Close();
}


…but leaving out “layout (location=…)” in the shader (just declaring the attributes in the order in which they are transferred) doesn’t change the result: the texture coordinates still get lost somehow.

By the way: I have another render pass in the same program that only renders one quad for post-processing (pass-through vertex shader but complex fragment shader). Until today I used glBegin() … there, but I also received no texture coordinates in the vertex shader. Since I only need four vertices there, I used gl_VertexID in the vertex shader as a workaround; the data is then transferred to the fragment shader as it should be.

I also have a working geometry shader that I can switch on and off. Data transfer works correctly there too.

Finally: a few weeks ago I got a new notebook. All of this worked on the old one, but that was an Intel onboard GPU; now I have a modern NVidia. I knew this would mean a bit of debugging to get my program working there…


If you’re using built-in (legacy) attributes (whether via vertex arrays or glBegin/glEnd), you need to use the corresponding built-in input variables (gl_Vertex, gl_Normal, gl_MultiTexCoord0, etc.) in the shader. You can’t rely upon these attributes being given specific locations (other than gl_Vertex having location zero). You can’t assign locations to built-in attributes with glBindAttribLocation() or “location=” qualifiers.
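For illustration, a legacy-style vertex shader matching GL_T2F_N3F_V3F via the built-ins might look like this (a compatibility-profile sketch, not code from the thread):

```glsl
#version 120
varying vec2 texCoord;
varying vec3 normal;

void main() {
    texCoord    = gl_MultiTexCoord0.st;  // fed by the T2F part of the interleaved array
    normal      = gl_NormalMatrix * gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```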

You can use interleaved arrays with modern code by using separate glVertexAttribPointer() and glEnableVertexAttribArray() calls for each attribute, with all attributes having the same stride but different offsets. E.g.:

glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, byteStride, (void*)(0*sizeof(GLfloat))); // TexCoord
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, byteStride, (void*)(2*sizeof(GLfloat))); // Normal
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, byteStride, (void*)(5*sizeof(GLfloat))); // Vertex


Thank you.

I was absolutely not sure whether I necessarily have to use glVertexAttribPointer…

Implementing this will be a lot of work, because I have to make several changes. I’d rather have done it step by step, but I think at this point I have to do it all at once.
I’ll start now and hope I can implement it without further questions. I’ll inform you when it’s done…


I would follow GClements’ recommendation as it is both portable across GPU vendors and considerably more flexible (supports more attribute formats and packing permutations) than glInterleavedArrays().

However, on NVidia GPUs only, you can mix old-style vertex array setup on the C++ side with new-style attributes in the GLSL shader if you give the attributes the correct location numbers in the shader. This is because old-style vertex attributes are mapped on top of new-style vertex attributes (in NVidia GL drivers). This is termed “vertex attribute aliasing”.

For the mapping between old-style vertex attribute names and new-style vertex attribute numbers, see Table X.2 in the NV_vertex_program extension. TL;DR: Use location 8 for TEXCOORD0, 2 for NORMAL, 3 for COLOR, and 0 for POSITION. You could try that small change just to get you to a testable, working state. But past that, I’d still advocate for ditching the old glInterleavedArrays() call and just use glVertexAttribPointer().
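So on NVidia only, the shader from the first post could be declared along these lines (a sketch relying on that non-portable aliasing):

```glsl
#version 330 compatibility
// NVidia-only attribute aliasing (NV_vertex_program, Table X.2); not portable!
layout(location = 8) in vec2 TexCoord; // TEXCOORD0
layout(location = 2) in vec3 Normal;   // NORMAL
layout(location = 0) in vec4 Vertex;   // POSITION
```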


Okay, here’s the first question already…

To start with (and also because it may be better for my application), I want to use only one location with - in this case - eight components. This is the same approach I used to set up the inputs for transform feedback. So I thought something like the following would do it:

    glBindBuffer(GL_ARRAY_BUFFER, VBO);

    glVertexAttribPointer(0, 8, GL_FLOAT, GL_FALSE, byteStride, 0*sizeof(GLfloat));

    glDrawArrays        (DrawType, 0 /*start*/, NumV);     

…with vertex-shader:

layout(location = 0) in    float SRC[8];

vec2 TexCoordIn (SRC[0], SRC[1]);
vec3 NormalIn (SRC[2], SRC[3], SRC[4]);
vec3 VertexIn (SRC[5], SRC[6], SRC[7]);

Unfortunately the application crashes immediately the moment I try to start the rendering. So I think something is missing, something like glMapBuffer, glBindBufferBase, or glInterleavedArrays.

Do you have a hint?


I would follow GClements’ recommendation…

Yes, in the end this will be better, so I’ll do it now…

I found one more hint concerning the crashes. Currently I use this as the vertex shader (the matrices are transferred with system-generated uniforms, so the uniforms don’t appear in this source):

layout(location = 0) in    float SRC[8];
//layout(location = 1) in    vec3 Normal;
//layout(location = 2) in    vec3 Vertex;

out Data {
    vec4 Vertex;
    vec3 Normal;
    vec4 TexCoord;
} Out;

void main () {

    Out.Normal      = vec3(SRC[2], SRC[3], SRC[4]);//normalize (mat3(projMatrix)* Normal);
    Out.Vertex      = modelMatrix * viewMatrix *vec4(SRC[5], SRC[6], SRC[7],1);//,1);
    Out.TexCoord    = vec4(SRC[1],SRC[1],1,1);
    gl_Position     = projMatrix * modelMatrix * viewMatrix * vec4(SRC[5],SRC[6],SRC[7],1);
}

As you can see, I don’t use SRC[0] here. The point is: as long as I don’t use SRC[0], the program doesn’t crash, but I don’t get a result…


The glVertexAttribPointer() call is invalid: the size parameter must be 1, 2, 3, 4, or GL_BGRA, so that call should fail with GL_INVALID_VALUE.

The “in float SRC[8]” declaration is valid, but it uses 8 consecutive locations, each of which needs separate glVertexAttribPointer() and glEnableVertexAttribArray() calls. If you really need to avoid separating the variables, you could use a mat2x4. That would occupy 2 locations (one for each column) and thus require 2 glVertexAttribPointer() and glEnableVertexAttribArray() calls.
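A sketch of that mat2x4 variant, assuming the same 8-float-per-vertex layout from the thread (how you split the columns back into texcoord/normal/position is up to you):

```glsl
layout(location = 0) in mat2x4 SRC; // column 0 = floats 0..3, column 1 = floats 4..7
// SRC[0].xy -> TexCoord, SRC[0].zw and SRC[1].x -> Normal, SRC[1].yzw -> Vertex
```

On the C++ side this needs two glVertexAttribPointer() calls with size 4 and the same stride, one at offset 0 for location 0 and one at offset 4*sizeof(GLfloat) for location 1, plus glEnableVertexAttribArray() for both locations.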



I forgot this.