In my computer game I push a LOT of primitive data (> 4500 GL_QUADS per frame), and since my environment looks like Swiss cheese, I have to render all of them; no hidden-surface or culling tests are possible because of the planned animations (and no, a BSP tree won't help).
The quads are stored in a vertex array, and the display routine basically boils down to a single glDrawArrays(GL_QUADS, 0, ~18000);
The initialization is dynamic; I can select the libGL.so at runtime.
The X<->GL initialization is done the straightforward way with SDL.
Once set up, it runs on my Voodoo2 (XFree86 4.1, SDL 1.2, Mesa) at a nearly depressing 20 fps. I took it to two friends of mine, both with the same CPU as mine (Athlon 700 MHz), one with a GeForce2, one with an Erazor III (is that the right name?), both running the NVIDIA drivers. On both machines the frame rate drops below 10 fps at the worst! (And on the two NVIDIA systems it makes no difference whether I run fullscreen or windowed.)
Do I by any chance pipe glDrawArrays not directly to the graphics hardware, but through the X server instead (indirect rendering)? If so, how do I turn that OFF? And how do I analyze this in the first place?
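If the libGL.so being picked up only speaks the GLX protocol, every glDrawArrays gets serialized through the X server, which would easily explain numbers like these. A quick way to check, assuming the standard GLX utilities are installed (`./mygame` below is a placeholder for your binary):

```shell
# Ask GLX whether rendering bypasses the X server.
glxinfo | grep "direct rendering"
# "direct rendering: Yes" -> calls go straight to the driver
# "direct rendering: No"  -> everything is piped through the X protocol

# Also worth checking which libGL.so the binary actually resolves at runtime:
ldd ./mygame | grep libGL
```

If it says "No" on the NVIDIA boxes, the fix is usually an installation problem: the Mesa software libGL.so is shadowing the vendor one, or the NVIDIA kernel/GLX modules aren't loaded.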
[glDrawArrays produces roughly 1 MB of traffic per call; see above: 18000 vertices * 64-byte stride.]
On the other hand, this guess may be completely wrong… (no idea!)
I have been desperately working on this for a week now. The topic would fit just as well into the NVIDIA, OpenGL, SDL, or XFree86 forums, so I'm grateful for any help I can get…