another VBO question


Are VBOs limited in size?
I can't find anything about it in the spec, but my English is poor.

Let me explain my problem:
I have two VBOs, one for normals, one for vertices.
When I use 120000 floats for each (40000 vertices), the display looks good.
With 196608: the application crashes.
With 270000: only 30% of the model appears, though the first frame looks good (when I disable clearing of the screen).
The size of the VBOs stays constant.

For more than 196608 floats, if I don't release the data in init and instead re-upload the data into the VBOs every frame in display, it works, but at 2 fps. I get 22 fps using vertex arrays and 85 with display lists (VSync on).

Code for init:


// Generate and bind the vertex buffer
glGenBuffersARB( 1, &vbovertices );
glBindBufferARB( GL_ARRAY_BUFFER_ARB, vbovertices );
glBufferDataARB( GL_ARRAY_BUFFER_ARB, nbfloat * sizeof(float), vertices, GL_STATIC_DRAW_ARB );

// Generate and bind the normal buffer
glGenBuffersARB( 1, &vbonormals );
glBindBufferARB( GL_ARRAY_BUFFER_ARB, vbonormals );
glBufferDataARB( GL_ARRAY_BUFFER_ARB, nbfloat * sizeof(float), normals, GL_STATIC_DRAW_ARB );


Display code:

// BUFFER_OFFSET is the usual ((char*)NULL + (i)) macro from the VBO spec
glBindBufferARB( GL_ARRAY_BUFFER_ARB, vbovertices );
glVertexPointer( 3, GL_FLOAT, 0, BUFFER_OFFSET(0) );
glBindBufferARB( GL_ARRAY_BUFFER_ARB, vbonormals );
glNormalPointer( GL_FLOAT, 0, BUFFER_OFFSET(0) );

Athlon XP 1900 / Windows XP / GeForce3 Ti 200 / driver 52.16

You could try mapping the buffer after creating it (or just check for OpenGL errors; that should hopefully tell you whether it worked).

The problem could simply be that you're trying to render too much in one call. I remember some cards that don't like indices above 65k, and if they have similar restrictions elsewhere, you could try rendering only small parts at a time. The problem with VBOs is that they sometimes do very strange things on different cards/drivers; some simply fail to allocate more than a certain size in VRAM, etc.
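The "render only small parts" idea can be sketched like this: split one big glDrawArrays range into batches that stay below the suspected per-call limit. This is only a sketch; `split_batches` and `MAX_BATCH` are made-up names, and 65535 is the limit suspected in this thread, not a documented constant (it is conveniently a multiple of 3, so GL_TRIANGLES batches stay aligned).

```c
#include <assert.h>
#include <stddef.h>

/* Suspected per-call vertex limit from this thread (made-up name). */
enum { MAX_BATCH = 65535 };

/* Fills firsts[]/counts[] with one (first, count) pair per draw call
 * and returns how many calls are needed. */
static size_t split_batches(size_t total, size_t max_batch,
                            size_t *firsts, size_t *counts)
{
    size_t n = 0, first = 0;
    while (first < total) {
        size_t count = total - first;
        if (count > max_batch)
            count = max_batch;
        firsts[n] = first;
        counts[n] = count;
        /* glDrawArrays(GL_TRIANGLES, (GLint)first, (GLsizei)count); */
        first += count;
        n++;
    }
    return n;
}
```

With 90000 vertices and a 65535 batch limit this yields two calls: (0, 65535) and (65535, 24465).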

But back to mapping: you could map the buffer and compare the values with the original array.
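A minimal sketch of that map-and-compare check, assuming the init code above: map the buffer read-only right after the upload and memcmp it against the source array. `buffer_matches` is a made-up helper; the GL calls themselves need a live context, so they are shown as a comment and only the comparison logic stands alone.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The mapping around this helper would look like:
 *
 *   float *mapped = (float *)glMapBufferARB(GL_ARRAY_BUFFER_ARB,
 *                                           GL_READ_ONLY_ARB);
 *   int ok = buffer_matches(mapped, vertices, nbfloat);
 *   glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);
 */
static int buffer_matches(const float *mapped, const float *original,
                          size_t nbfloat)
{
    /* glMapBufferARB returns NULL on failure, so guard both pointers. */
    return mapped && original &&
           memcmp(mapped, original, nbfloat * sizeof(float)) == 0;
}
```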

I will check this.
The first pass is OK, but afterwards the data seems to be lost.

I've added code and a screenshot.

Low poly: the VBO looks good.

High poly: the VBO is buggy.

I dumped the VBO to a file; the data is not corrupted.

Feel free to download the code (2.45 MB) to test whether the bug comes from my PC or from me.

Hit space to switch between the two models.
Hit E to dump the VBO to a file.

I used a VC7 beta to compile the code.

I'm waiting for your feedback. I know the code is a bit ugly (except the basecode by NeHe, of course!)

Are all your indices in the [0-65535] range?


The first model is in that range; the 'high poly' one is not.
I made more tests and this is the problem.
I got feedback that the code seems to work well on cards >= GeForce4.
For the GeForce3 Ti 200 the limit seems to be 65535.
This should be written in the spec.

And it's a pity that no error like 'GL_VERTEX_BUFFER_OVERFLOW_ARB' or 'GL_UPGRADE_YOUR_CARD_PLZ_ARB' is raised by glGetError.

Thanks to those who tried to help me.

It's not written in the spec because it's video-card dependent. There's a function to get the maximum number of vertices per call; it's your own fault if you didn't use it.


OK. Which function must I use?

After I load data into the VBO I check its size; the VBO size is always the same as the data size.

I can't find a GL_BUFFER_MAXSIZE flag or equivalent in the spec.

It's not a limit on the buffer size but a limit on how many vertices you can "draw" in a single call. I don't know if that limit always happens to be the same as the max index, but in your case the card simply can't handle indices bigger than an unsigned short or (since you don't seem to use indices) more than 65k vertices in a call.

So it's not really a problem with VBOs.
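One way to check for that condition up front, as a sketch (`indices_fit_ushort` is a made-up helper): scan the index array and confirm every index fits in an unsigned short before handing it to glDrawElements with GL_UNSIGNED_SHORT.

```c
#include <assert.h>
#include <stddef.h>

/* Returns 1 if every index fits in 16 bits, i.e. it is safe to convert
 * the array to GL_UNSIGNED_SHORT indices; cards like the one in this
 * thread reportedly choke on indices above 65535. */
static int indices_fit_ushort(const unsigned int *indices, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (indices[i] > 65535u)
            return 0;
    return 1;
}
```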

[This message has been edited by Jared (edited 02-19-2004).]

I think it's an OpenGL 1.2 attribute. Try glGetIntegerv with GL_MAX_ELEMENTS_VERTICES or GL_MAX_ELEMENTS_INDICES.
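A sketch of using that GL 1.2 query: the glGetIntegerv call itself needs a live context, so it is shown as a comment, and `clamp_batch` is a made-up helper that turns the queried hint into a per-call batch size, falling back to a conservative default when the query yields nothing (e.g. on a pre-1.2 driver).

```c
#include <assert.h>
#include <stddef.h>

/* The query would look like:
 *
 *   GLint maxVerts = 0;
 *   glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &maxVerts);
 *   size_t batch = clamp_batch(maxVerts, 65535);
 */
static size_t clamp_batch(long max_elements_vertices, size_t fallback)
{
    if (max_elements_vertices <= 0)
        return fallback;  /* query failed or value never set */
    return (size_t)max_elements_vertices;
}
```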


Maybe I don't understand you very well, but
it works fine with the same number of vertices using a simple vertex array (one call to glDrawArrays).

If I load the data into the VBO each frame, it works.

Anyway, I can't handle a VBO with more than 65535 vertices on my card.

it works fine with the same number of vertices using a simple vertex array (one call to glDrawArrays).

And if you try glDrawElements, what happens?

If I load the data into the VBO each frame, it works.

That's weird; it sounds like your VBO data becomes corrupted. Memory leak?

Anyway, I can't handle a VBO with more than 65535 vertices on my card.

Very likely; I think only GeForce4 and later can handle more.


It seems like a memory leak, but when I dump the VBO to a file, the data is okay.

Hm… do I understand correctly that something seems to be overwriting part of the buffer every frame, and that uploading the data each frame makes it work? At what point are you dumping the VBO to disk? Or is the data correct all the time and it's just the drawing that doesn't work? Is anything else accessing or using the VBO?

Check that max-vertices value. If it's 65k, just think: "OK, it's pure luck that the vertex-array path is working."


The almighty spec sez:

Implementations denote recommended maximum amounts of vertex and index data, which may be queried by calling GetIntegerv with the symbolic constants MAX_ELEMENTS_VERTICES and MAX_ELEMENTS_INDICES. If end - start + 1 is greater than the value of MAX_ELEMENTS_VERTICES, or if count is greater than the value of MAX_ELEMENTS_INDICES, then the call may operate at reduced performance.

The implementation is not allowed to fail because of this.
It's a driver bug, plain and simple.

That is interesting, because on a TNT2, a GeForce 2, and a GeForce FX I have been able to draw VBOs with 600K vertices without any problems.

I use glDrawArrays. It reads like a memory issue.

I got 4096 for MAX_ELEMENTS_VERTICES and MAX_ELEMENTS_INDICES, but I don't use glDrawElements, I use glDrawArrays.
And it works fine even on an i810 chipset.
I'll try to update my driver.

For maximian: have you got a demo that I could test?