I am 100% sure that it’s perfectly possible to draw more than 64K vertices per call with glDrawElements() (with or without VBO). I’m using a GF3 with the 52.16 drivers, but I don’t recall this ever being a problem with older drivers either.
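For example, something along these lines runs fine here (a minimal sketch assuming a current GL context; the array names and counts are made up, not from any particular test):

    /* Sketch: one glDrawElements() call covering far more than 64K vertices
       by using 32-bit indices. Vertex data is assumed filled elsewhere. */
    #define NUM_VERTS 210000                  /* divisible by 3 for GL_TRIANGLES */
    static GLfloat verts[NUM_VERTS * 3];      /* x,y,z per vertex */
    static GLuint  indices[NUM_VERTS];        /* GLuint, so values > 65535 are legal */

    for (GLuint i = 0; i < NUM_VERTS; ++i)
        indices[i] = i;

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawElements(GL_TRIANGLES, NUM_VERTS, GL_UNSIGNED_INT, indices);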
There is a hard limit of 64K vertices when using VAR on GF2-class cards, but this limit was raised to 1M vertices on the GF3 and up. It also does not exist when not using VAR.
There is also a recommended maximum index/vertex count for glDrawRangeElements(), but it is nothing more than that: a recommendation. This number is 4096 for all GeForces. For Radeon 9x00 cards, it’s 64K indices/2M vertices.
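You can query these recommendations at runtime with the standard GL 1.2 queries (the printout is just a sketch; requires <stdio.h>):

    GLint maxV = 0, maxI = 0;
    glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &maxV);
    glGetIntegerv(GL_MAX_ELEMENTS_INDICES,  &maxI);
    printf("glDrawRangeElements hints: %d vertices, %d indices\n", maxV, maxI);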
Your new version consistently reports between 5 and 8 MTris/sec on my machine. The numbers look more reliable, but they also look low. I still don’t trust your timing code, though. Could you explain exactly what you’re measuring to produce all these numbers?
You have the source, and here are the measured loops. It is such simple, basic OpenGL usage that I really wonder whether anything could be made more straightforward. And it works as expected on ATI cards.
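In outline, a loop of this shape is what gets timed (a rough sketch, not a verbatim paste from the source; timer(), LOOPS and TRI_COUNT are hypothetical placeholders):

    /* Hypothetical sketch of a triangle-rate measurement. glFinish()
       brackets the loop so the GPU actually completes the timed work. */
    glFinish();                               /* drain anything pending */
    double t0 = timer();                      /* hypothetical high-res timer */
    for (int i = 0; i < LOOPS; ++i)
        glDrawElements(GL_TRIANGLES, TRI_COUNT * 3, GL_UNSIGNED_INT, indices);
    glFinish();                               /* wait until the GPU is done */
    double t1 = timer();
    double mtris_per_sec = (double)TRI_COUNT * LOOPS / (t1 - t0) / 1.0e6;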
Tom, Zengar, thanks for the responses, I think I have got close to the bottom of this mess.
I have made a small web presentation with the measured relation between CPU usage by the application (i.e. the game engine) and the GPU triangle count per second (without rasterization), with the source & high-res graphs included in the archive at the bottom of the page:
http://kickme.to/speedy1
Zengar, the tri counts are low because I am not optimizing indices in the test case (they are just 0, 1, 2, 3, 4, 5, 6, 7, 8, …), and you could be using the 52.16 drivers…
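That is, the index buffer is a plain ascending sequence, roughly like this sketch (numIndices is a placeholder), which gives the post-T&L vertex cache no reuse at all:

    for (GLuint i = 0; i < numIndices; ++i)
        indices[i] = i;   /* 0, 1, 2, 3, ... — every vertex transformed once, no cache hits */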