I have 2 computers:
- PIII @ 600 MHz with GeForce FX 5200 128-bit 128 MB, AGP 2X (Detonator 56.54)
- AMD Thunderbird @ 1400 MHz with GeForce 256 DDR, AGP 4X (Detonator 56.56)
Here is how I set up the vertex arrays:
glInterleavedArrays(GL_C4UB_V3F, 0, dataBuffers);
glDrawElements(GL_TRIANGLES, (heightTex.Width()-1) * (heightTex.Height()-1) * 6, GL_UNSIGNED_INT, (GLuint*)indexBuffers);
I have 128*128 vertices and 127*127*2 triangles. There is no texturing or lighting.
I get 72 fps on the first system and 95 fps on the second… why?
The data transferred per frame is small, so AGP bandwidth and fillrate shouldn't be the bottleneck. The application is clearly transform-limited: I get the same frame rates at all framebuffer sizes.
Wasn't the FX 5200 supposed to be faster than the GeForce 256 at T&L?
Am I doing something wrong, or is the 5200 really that slow? I know it isn't the fastest card out there, but I did expect better performance than a GeForce 256. (I exchanged my old 64-bit 5200 for a new 128-bit one because the 64-bit version was seriously fillrate-limited.)
I hope I am not off topic…