VBO problem

Dear forum,

I'm trying to write a very simple program using Vertex Buffer Objects (VBOs), with Eclipse & MinGW.

Many other things already work (vertex arrays, display lists, etc.), but for VBOs I need the extension mechanism.

So I downloaded glext, included everything necessary, linked to the lib, and so on. In the end the compilation gives no errors.

But my software doesn't work, so I don't know whether the problem comes from my code or from glext.

Part of my code is here (it just draws a triangle); maybe by looking only at this partial code it's possible to spot some error.
I've only included the most important parts of the main() and render() functions.

GLfloat VertexArray[] = { 0, 0, 10, 0, 0, 10 };
GLuint BufferOBJ[1];
const GLuint VertexInd = 0; // index into BufferOBJ

int main(int argc, char *argv[])
{
    // ... window/context setup omitted ...

    // generate a new VBO and get the associated ID
    glGenBuffers(1, BufferOBJ);

    // bind VBO in order to use it
    glBindBuffer(GL_ARRAY_BUFFER, BufferOBJ[VertexInd]);

    // upload data to VBO
    glBufferData(GL_ARRAY_BUFFER, 6, VertexArray, GL_STATIC_DRAW);

    glEnableClientState(GL_VERTEX_ARRAY); // activate vertex coords array

    // ... main loop omitted ...
    return 0;
}

void render(void)
{
    glEnableClientState(GL_VERTEX_ARRAY); // activate vertex coords array

    glBindBuffer(GL_ARRAY_BUFFER, BufferOBJ[VertexInd]);
    glVertexPointer(2, GL_FLOAT, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, 6);
}



Thanks for the help.


Not sure how you defined glGenBuffers and the others. Just to make things easier with GL extensions, try using GLEW (http://glew.sourceforge.net/); it handles all extensions in a cross-platform manner and is very easy to use.

You don’t need to call glEnableClientState and glVertexPointer every frame; in this case they only need to be called once. I don’t see why you would need glFlush just to render a triangle either. Also consider adding an index buffer and using glDrawElements; an index buffer will be needed for drawing anything more than simple primitives.

This is your problem:

glBufferData(GL_ARRAY_BUFFER, 6, VertexArray, GL_STATIC_DRAW);

The second parameter to glBufferData is not the number of elements in your source data, it’s the size (in bytes) of your source data. So here you’re defining a buffer that’s only sized at 6 bytes, you end up not loading all of your data, and you end up reading past the end of the buffer with your draw call.

What you want is “sizeof (VertexArray)” instead.

Even with that you still have another problem:

glDrawArrays(GL_TRIANGLES, 0, 6);

The third parameter to glDrawArrays is the number of vertices to draw. Since your glVertexPointer call sets up each vertex to have two floats, and since you have 6 floats in your source data, that means that you’re drawing 3 vertices, not 6.

A third problem is that you seem to be using a single-buffered GL context via GLUT. DON’T DO THAT. It doesn’t matter that online tutorials (I’m guessing you’re using NeHe here) do it, it’s still bad. Create a double-buffered context instead: it’s just a matter of using GLUT_DOUBLE instead of GLUT_SINGLE and then using glutSwapBuffers instead of glFlush. There’s nothing difficult or advanced about using double-buffered contexts; they’ll give you better hardware and OS compatibility, and NeHe needs to die in a fire for propagating this nonsense.

Thanks mhagain for your reply.

As you said, the problem was the sizeof. Just to mention it: writing 3 or 6 or 16 in glDrawArrays works the same, but 3, as you said, makes more sense than 6.

Thanks very much; I spent almost 24 hours trying to get GLEW to work, and for a non-expert like me it turned out to be impossible.

glext worked right away, also in Eclipse with MinGW (and of course VS).

Thanks a lot again.


With hardware transform and lighting I would expect them to work. You’re still overflowing a buffer, but the hardware can swallow that benignly. Worst case is you’re burning some extra performance.

If you had software transform and lighting (or if your GL driver bumps you to a software fallback) it’s most likely going to crash hard.

Either way you should still send the correct parameters. Don’t use “it works on my PC” as an excuse to do things the wrong way.