hello, it’s me again…
Now I have tested my GL_ARB_vertex_program code on several machines; the "hardware pool" ranges from a GeForce2 MX up to a GeForce3 Ti500. (Note: after installing the current nVidia drivers, the GL_ARB_vertex_program extension is supported on all of these cards!)
The scene contains about 10,000 tris, drawn with standard function calls (the code is unoptimized):
On the GeForce2 MX I get around 60 fps without using GL_ARB_vertex_program,
but with the vertex program enabled, which does nothing more than multiply the vertices by the current matrix and pass the texture coordinates through, I get only 40 fps!
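For reference, a minimal ARB vertex program of the kind described (transform by the tracked modelview-projection matrix, pass texcoords through) looks roughly like this; this is a sketch assuming the standard `state.matrix.mvp` binding, not the exact program from my code:

```
!!ARBvp1.0
# Fetch the tracked modelview-projection matrix
PARAM mvp[4] = { state.matrix.mvp };
TEMP pos;
# Transform the incoming vertex position (row-major DP4s)
DP4 pos.x, mvp[0], vertex.position;
DP4 pos.y, mvp[1], vertex.position;
DP4 pos.z, mvp[2], vertex.position;
DP4 pos.w, mvp[3], vertex.position;
MOV result.position, pos;
# Pass texture coordinates through unchanged
MOV result.texcoord[0], vertex.texcoord[0];
END
```

Even a trivial program like this replaces the card's fixed-function path, which may be relevant to the slowdown.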
Is this normal?
On the GeForce3 Ti500 I get around 130 fps without GL_ARB_vertex_program,
but again, with the vertex program enabled, the framerate drops to around 120 fps. (As you can see, the drop is not as big as it is on the GeForce2 MX card.)
Is a framerate drop of this magnitude normal?