Strange performance

I have an FX5700 with the 66.93 driver under Win2000. The scene consists of 60,000 triangles, each with a normal vector and a texture coordinate. When I use glVertex/glNormal/glTexCoord I get 53 FPS. That is about 3M triangles per second, which seems reasonable given the half a million or so API calls per frame. Then I grouped the scene into 100 batches of 600 triangles each and called glDrawElements for each batch, which is fewer than 1,000 API calls per frame. To my great surprise, performance dropped to 50 FPS! Then I tried VBOs, and it dropped further to 48 FPS!?

Obviously, something is very wrong. Have you seen such a problem before? Any fresh ideas?

Thank you.


Update your driver; VBO support was still young when yours was released.
Check your AGP chipset drivers in the device manager.
The NVIDIA OpenGL renderer string (glGetString(GL_RENDERER)) must contain “AGP” on an AGP board.
Check your system BIOS AGP aperture size. Try different sizes.
Use float attribute data (not doubles).

Yes, I use GL_FLOAT and GL_UNSIGNED_SHORT. Let’s forget about VBOs for a moment; I just can’t understand why plain glDrawElements (under 1,000 API calls per frame) runs slower than glVertex (hundreds of thousands of API calls per frame) when the data sent to the card is exactly the same. It should be at least several times faster.

Surely the FX5700 can handle more than 3M triangles per second? Doom 3, for example, runs just fine with all shadows on.

Hard to say what the problem is without seeing the code and data.
Other hints:

  • Tried new drivers?
  • Vsync off?
  • Are you sure that your fps measurement is accurate?
  • Have you analyzed your bottleneck?
    CPU? Use a single-pixel scissor, or cull everything with glCullFace(GL_FRONT_AND_BACK).
    Fillrate? Use a smaller window.
    Textures? Switch texturing off, or use a 2x2 texture for everything.