Indices for indexed primitives with VAR

I was playing around with NVIDIA's VAR extension (NV_vertex_array_range) on a GF3 Ti200 and I noticed this:
If I store my indices (I’m using indexed triangles) as well as my vertex data in AGP memory, I end up with a lower framerate than if I’d used system memory for everything.
If I store the indices in system memory and the vertex data in AGP memory, then I get the expected increase in framerate.
Is that the usual behavior for VAR?
If so, is there any better way to store the indices in AGP/video memory for better performance?
Thanks again everyone!

Read NV's VAR spec. It says:

And if you think about it, it's perfectly logical: your program isn't supposed to put anything in the card's memory other than textures and vertex data. You only tell the card which vertices to use. If those vertices are on the card, that's faster. But glDrawElements is still executed on the CPU, that is, outside the graphics card. So if the indices live in AGP or video memory, the CPU has to read those values back, and CPU reads from AGP/video memory are very slow (those regions are typically uncached/write-combined).
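To make the split concrete, here's a minimal sketch of the arrangement the original poster found fastest: vertex data in AGP memory via VAR, indices in plain system memory. This assumes a live GL context on a Windows/NVIDIA driver exposing NV_vertex_array_range, with the extension entry points already fetched via wglGetProcAddress; the Vertex layout is made up for illustration, and it won't run stand-alone.

```c
#include <windows.h>
#include <GL/gl.h>
#include <stdlib.h>
#include <string.h>

typedef struct { float pos[3]; float normal[3]; } Vertex; /* illustrative layout */

/* Entry points from (WGL_)NV_vertex_array_range, fetched elsewhere
 * with wglGetProcAddress. */
extern void *(__stdcall *wglAllocateMemoryNV)(int size, float readFreq,
                                              float writeFreq, float priority);
extern void (__stdcall *glVertexArrayRangeNV)(int length, const void *pointer);

void drawWithVAR(int numVerts, const Vertex *srcVerts,
                 int numIndices, const unsigned short *srcIndices)
{
    /* Vertex data goes into AGP memory. A priority around 0.5 requests
     * AGP memory; closer to 1.0 requests video memory. */
    Vertex *agpVerts = wglAllocateMemoryNV(numVerts * sizeof(Vertex),
                                           0.0f, 0.0f, 0.5f);
    memcpy(agpVerts, srcVerts, numVerts * sizeof(Vertex));

    glVertexArrayRangeNV(numVerts * sizeof(Vertex), agpVerts);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), agpVerts->pos);

    /* Indices stay in ordinary (cached) system memory: the CPU walks
     * them when it processes glDrawElements, and CPU reads from
     * AGP/video memory are slow. */
    unsigned short *sysIndices = malloc(numIndices * sizeof(unsigned short));
    memcpy(sysIndices, srcIndices, numIndices * sizeof(unsigned short));

    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, sysIndices);
}
```

Error handling (wglAllocateMemoryNV returning NULL, fencing with NV_fence before reusing the AGP buffer) is omitted to keep the sketch short.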


I believe there will be an extension to VAR for index arrays soon.

Originally posted by knackered:
I believe there will be an extension to VAR for index arrays soon.


It's called NV_element_array and will be in hardware on the GeForce FX. It's currently available in the NV30 emulation drivers. The preliminary spec is at htt://
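For a rough idea of the shape of that extension, here's a hedged sketch of how NV_element_array lets the index array itself live in VAR memory so the GPU, not the CPU, fetches the indices. The entry-point and token names below follow the preliminary spec as I understand it and may change; the pointers are assumed fetched via wglGetProcAddress, and agpIndices is assumed to point into memory covered by glVertexArrayRangeNV.

```c
#include <GL/gl.h>

/* Assumed entry points per the preliminary NV_element_array spec. */
extern void (__stdcall *glElementPointerNV)(GLenum type, const void *pointer);
extern void (__stdcall *glDrawElementArrayNV)(GLenum mode, GLint first,
                                              GLsizei count);

void drawFromElementArray(const unsigned short *agpIndices, int numIndices)
{
    /* Enable the element array and point it at indices that live inside
     * the vertex array range (AGP/video memory). */
    glEnableClientState(GL_ELEMENT_ARRAY_NV); /* token name per prelim. spec */
    glElementPointerNV(GL_UNSIGNED_SHORT, agpIndices);

    /* The draw call carries no index pointer itself, so the CPU never
     * touches the indices; the GPU pulls them from VAR memory. */
    glDrawElementArrayNV(GL_TRIANGLES, 0, numIndices);
}
```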