Cube mapping disabling VAR...

This is my first post here. woo

I haven’t found any mention of this anywhere, so I hope I’m not bringing up old news…

I’ve found a really weird error occurring in my terrain renderer just today.

I added real-time cube mapping for fun the other day, and found I was getting a pretty significant slowdown when actually using the cube maps (instead of just rendering the views).
I tried all sorts of things, but only got back to normal performance when I disabled Vertex Array Range.
I was pretty shocked by this and tried to fix it, but couldn’t. Even flushing the array and disabling VAR before any cube map usage didn’t seem to work…

This really annoys me because I get around twice the performance when using VAR, yet if I use it with cube mapping, I get half the performance… so it’s about 4 times slower.

Does anyone get the same problem? Does anyone know a workaround?

I’ve got some pics, but it seems I can’t post ’em…

This pic is with VAR ‘activated’.

This is with VAR disabled.

Btw: I’ve also found that using compiled vertex arrays with VAR only gives a speed boost when using small array strides…


What type of hardware are you running on?

Thanks -

Lol. It’s meant to be RipTorn, oops.

I’m running a 32MB GeForce 1 DDR (WinFast brand) using the NVIDIA 12.41 drivers… I’m on a VIA Apollo 4x AGP motherboard chipset under the GigaByte brand, so no fast writes or anything, with a 533MHz Pentium III… and 256MB of RAM…
Nothing is overclocked…
The AGP size (the name escapes me at the mo) is set at 64MB…

[This message has been edited by RIpTorn (edited 06-12-2001).]


There is an issue with the texture matrix in GeForce256 that causes software fallback under some circumstances.  As mentioned in the vertex array range whitepaper, any software fallback is particularly painful when using VAR because the driver has to read from uncached memory.

If you can avoid using rotations in the texture matrix, everything will remain fully hardware accelerated.

There is a related issue that comes up when using a 2D projective texture matrix on GeForce256. See my post on the thread titled “Does the GeForce DDR accelerate fully general texture matrix setups ?” for details.

This issue exists ONLY for GeForce 256 (DDR & SDR) and Quadro hardware. No other GeForce-class hardware is affected. Specifically, no variety of GeForce2, GeForce3, Quadro2 or Quadro DCC is affected.

Thanks -


I had a crash occurring on a GeForce 256 SDR a while back, when rotating the texture matrix about the Y axis while using display lists. Was gonna send it to Matt to find out why, but never got round to it. I dropped the display lists in the end, I believe, and that fixed it.