I have a GeForce 2.
When I call glGetIntegerv to query the maximum length of a vertex array range, I get 65535. This is confusing, because that is a very small amount of memory: with 4 floats per color, 2 floats per texture coordinate and 3 floats per position, fewer than 1000 triangles fit into that much memory.
However, I checked NVIDIA's VAR demo: it uses 4,000,000 bytes of memory with VAR and doesn't crash.
Why does NVIDIA set this limit when they support bigger sizes? Or is there no limit at all, and one should never check this value?
It’s only a recommended value for “optimal” speed.
They have a limit of 64K indices in any given draw command, which is what I suspect the number tells you. The vertex array range can contain many more vertices than this, since your vertex data pointers can start anywhere inside the range.
No, this is a real requirement that you can’t ignore…
Note that it is not the max offset into the buffer, it’s the max index value.
So that means the size of my array doesn't matter as long as I don't use more than 65535 indices (which means 65535 / 3 = 21845 triangles)?
That would only be the case if every triangle had 3 unique vertices; most highly tessellated models have just over half as many vertices as faces. You should be able to describe meshes of over 100k polygons.
This assumes you understand that the limit applies to the value of the indices (and thus the number of vertices in the array), not to the number of indices.