# Optimizing glDrawArrays

Hi,

I am using glDrawArrays to draw quads like this:

```cpp
glVertexPointer(3, GL_DOUBLE, sizeof(QV[0]), &QV[0][0]);
```

where QV is the list of vertices representing quads. Everything goes fine until the size of QV exceeds GL_MAX_ELEMENTS_VERTICES_WIN. If I understand correctly, this is the limit imposed on the number of vertices accepted by glDrawArrays.
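For reference, this is the vertex layout that the stride argument above assumes; the std::array-of-3-doubles definition of QV's element type is my assumption, but it matches the `sizeof(QV[0])` stride and `&QV[0][0]` base pointer in the call:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Assumed vertex layout: QV is a contiguous list of xyz positions in
// double precision, so sizeof(QV[0]) is the stride passed to
// glVertexPointer and &QV[0][0] is the base pointer.
using Vertex = std::array<double, 3>;
using VertexList = std::vector<Vertex>;

// Stride between consecutive vertices: three tightly packed doubles.
constexpr std::size_t vertex_stride = sizeof(Vertex);

static_assert(vertex_stride == 3 * sizeof(double),
              "no padding between the xyz components");
```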

So, I try to split my vertices into chunks as follows:

```cpp
int MV;
glGetIntegerv(GL_MAX_ELEMENTS_VERTICES_WIN, &MV);

int chunk_size = 10000;
int start = 0;

cout << "Trying to break up " << QV.size() << " vertices! limit = " << MV << "\n";

while (start < QV.size() - 1) {
    if (start + chunk_size > QV.size())
        chunk_size = QV.size() - start;

    cout << "Trying to draw " << chunk_size << " elements starting from " << start << endl;

    glDrawArrays(GL_QUADS, start, chunk_size);
    start += chunk_size;
}
```

The output that I get is:

```
Trying to break up 2478736 vertices! limit = 1048576
Trying to draw 10000 elements starting from 0
...
Trying to draw 10000 elements starting from 160000
```

and the code breaks. If I try breaking up a smaller set of vertices, it works fine, but for this particular dataset the code always breaks, even if I try to increase or decrease the chunk size. I am not quite sure what is happening.

I tried passing the vertex pointers in chunks as well,

```cpp
glVertexPointer(3, GL_DOUBLE, sizeof(QV[0]), &QV[start][0]);
```

but the code breaks at the same place.

Any help is appreciated. Thank you.

> Everything goes fine, until the size of QV exceeds the GL_MAX_ELEMENTS_VERTICES_WIN.

There’s no such enumerator. There is GL_MAX_ELEMENTS_VERTICES.

Also, you say that things are ‘fine’ until you reach this limit, but you don’t say what happens when you exceed it.

> If I understand correctly, this is the limit imposed on the no of vertices accepted by glDrawArrays.

No, it isn’t. You might notice the “ELEMENTS” part of GL_MAX_ELEMENTS_VERTICES, as in glDrawElements.

You’re using glDrawArrays. It doesn’t apply to you.

Also, this enumerator doesn’t define a hard limit. It only defines a suggestion for glDrawRangeElements ranges.

Thanks for the quick response, Alfonso. I googled it, and it seems Windows redefines this macro as GL_MAX_ELEMENTS_VERTICES_WIN.

When I exceed the limit, the code just breaks. I have had this problem drawing every type of primitive (GL_LINE_STRIP, GL_TRIANGLES, GL_QUADS, etc.), and the quick and naive fix I applied earlier was to break the enormous vertex data into small chunks (actually small buffers).

If there is no such limit on glDrawArrays, why would the code just break after printing the line “Trying to draw 10000 elements starting from 160000”?

The OpenGL spec doesn’t define any such limit, but the underlying assumption is of a fully conformant driver. I wouldn’t rely on it, in other words.

Breaking into chunks is one approach that works, yes. You don’t actually need to break into smaller buffers: just use the same VBOs as before but adjust your parameters to glDrawArrays to match your chunk sizes. That way you’ll be more easily able to tune your ideal chunk size to different hardware and use cases. It’ll save you the overhead of switching VBOs too.
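To make that concrete, here is a minimal sketch of the chunking arithmetic on its own, with the GL calls left out so the logic stands alone (batch_ranges is a name I made up, not a GL function):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Split a draw over [0, total) vertices into (first, count) batches of at
// most max_batch vertices each. For GL_QUADS, max_batch should be a
// multiple of 4 so no quad straddles a batch boundary.
std::vector<std::pair<std::size_t, std::size_t>>
batch_ranges(std::size_t total, std::size_t max_batch)
{
    std::vector<std::pair<std::size_t, std::size_t>> out;
    for (std::size_t first = 0; first < total; first += max_batch)
        out.emplace_back(first, std::min(max_batch, total - first));
    return out;
}
```

Each resulting pair then feeds one glDrawArrays(GL_QUADS, first, count) call against the same bound vertex array, so only the draw parameters change between batches.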

> Yes there is. It requires GL_WIN_draw_range_elements.
GL_WIN_draw_range_elements isn’t supported by anything or anyone as far as I know.

GL_MAX_ELEMENTS_VERTICES_WIN works because it just happens to be the same value as GL_MAX_ELEMENTS_VERTICES.

> Yes there is. It requires GL_WIN_draw_range_elements.

Is that a real extension? It’s not in the registry, nor is it mentioned in the .spec files.

I don’t know. I’ve never seen any spec for any of the GL_WIN extensions.

PS:

> glVertexPointer(3, GL_DOUBLE, sizeof(QV[0]), &QV[0][0])

Which video card supports 64-bit vertices?

> Which video card supports 64-bit vertices?

All DX11-class hardware should be capable of it, though with varying degrees of performance.

There is also the issue that VertexPointer should internally convert the DOUBLE to FLOAT. VertexAttribLPointer should get around this limitation, but as you said: it only works on DX11-class hardware.
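If double precision isn’t actually needed on the GPU side, one option is to do the narrowing once on the CPU and hand GL_FLOAT data to glVertexPointer instead of letting the driver convert on every draw. A minimal sketch (to_float_buffer is a hypothetical helper, not part of any GL API):

```cpp
#include <cstddef>
#include <vector>

// Narrow a double-precision vertex array to floats once, up front,
// instead of letting glVertexPointer(GL_DOUBLE, ...) convert per draw.
std::vector<float> to_float_buffer(const std::vector<double>& src)
{
    std::vector<float> dst(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
        dst[i] = static_cast<float>(src[i]);
    return dst;
}
```

The float buffer is then submitted with `glVertexPointer(3, GL_FLOAT, 3 * sizeof(float), data)`, halving the upload size as well.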