I suspect I’m running into a driver issue here, so I hope this is the right forum. I’m trying to tune performance for a new screensaver that draws implicit surfaces with at most around 8,000 vertices (VBO size < 200 KB). They’re broken down into tristrips with an average length of about 4.001.
For vertex array draw:
glDrawElements(GL_TRIANGLE_STRIP, triStripLengths[i], GL_UNSIGNED_INT, &indices[start_vert]);
For VBO draw:
glMultiDrawElements(GL_TRIANGLE_STRIP, (const GLsizei*)triStripLengths, GL_UNSIGNED_INT, (const GLvoid**)vbo_index_offsets, num_tristrips);
I draw each surface about 5 to 20 times, so I expect VBOs to outperform vertex arrays by quite a bit. On a Quadro FX 3400 under Windows XP, I get about 40-50% better performance with VBOs. The same holds for a 6600 GT under Windows XP and Fedora 11.
However, vertex arrays are about 50% faster on an 8800 GTS under Windows Vista and Fedora 7, and about 100% faster on an 8800 GTX under Windows XP.
Does anyone know of any hang-ups with the 8800-series hardware or drivers? This behavior is very inconsistent with the older NVIDIA cards I have tried.
As an aside, upgrading to the newest Windows driver (197.45) on both XP and Vista introduces a new vertex array performance problem: the first ~10 frames render very slowly (0.5-1 Hz) when the screensaver starts, and then rendering suddenly jumps to full speed (15-30 Hz).