Strips and degenerate triangles

Hello everyone. I’m using the NVStrip library and I’ve come across some strange behaviour. On a GeForce 4 4600, at a resolution of 640x480 (with Quincunx AA), my app runs at 18 fps without strips and 20 fps with them (~460 KTris rendered). At 1280x1024, it runs at 17 fps without strips and 16 fps WITH them (and not the other way around)! My geometry is rendered with VAR in video memory (128 MB available). The thing is, the nvstrip library creates a lot of degenerate triangles, which increases the total number of triangles sent by roughly 10-15%. Could these degenerate triangles explain the odd relative drop in performance when using strips at the higher resolution?
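For context, here is a minimal sketch of how strips are typically stitched together with degenerate triangles and drawn with glDrawElements. The vertex and index data are made up for illustration; the real index lists come out of the stripping library.

```c
#include <GL/gl.h>

/* Hypothetical data for illustration; real index lists come from the stripper. */
static const GLfloat vertices[8][3] = {
    {0,0,0}, {0,1,0}, {1,0,0}, {1,1,0},   /* strip A */
    {2,0,0}, {2,1,0}, {3,0,0}, {3,1,0}    /* strip B */
};

/* Two strips concatenated into one by repeating the last index of A (3)
   and the first index of B (4). Those two repeats create four zero-area
   (degenerate) triangles that the GPU still has to transform. */
static const GLushort stitched[] = {
    0, 1, 2, 3,     /* strip A */
    3, 4,           /* degenerate bridge */
    4, 5, 6, 7      /* strip B */
};

void drawStitchedStrip(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawElements(GL_TRIANGLE_STRIP,
                   sizeof(stitched) / sizeof(stitched[0]),
                   GL_UNSIGNED_SHORT, stitched);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```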

Thanks in advance for your advice!


How are you sending the degenerate triangles to the 3D card? Via glBegin(GL_TRIANGLES)?

Arath

Nope, with glDrawElements. Why?

With glBegin()/glEnd() pairs it’s pretty slow, especially when you’ve got many degenerate triangles (which is your case). But since you don’t use them, that’s not it.
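Just to make the contrast concrete, this is roughly what that slow path looks like, reusing the hypothetical vertices/stitched arrays from the sketch above: one glVertex call per index, degenerate triangles included, instead of a single glDrawElements call.

```c
/* Immediate-mode version of the same strip: every index, including the
   ones that only form degenerate triangles, costs a glVertex call. */
void drawStitchedStripImmediate(void)
{
    int i;
    glBegin(GL_TRIANGLE_STRIP);
    for (i = 0; i < (int)(sizeof(stitched) / sizeof(stitched[0])); ++i)
        glVertex3fv(vertices[stitched[i]]);
    glEnd();
}
```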

Arath

Somebody told me this may be because, at low resolutions, the degenerate triangles are not rasterized since they have zero area, whereas at high resolutions, precision errors introduced by the use of VAR (??) can give them a non-zero area, in which case even the degenerate triangles get rasterized.

Does that sound possible?

It’d be weird, but it doesn’t sound impossible. You should try with standard vertex arrays and see if the framerate drop still occurs.

If the drop does not occur with plain VA, that means VAR is causing the problem.

If the drop occurs with VA too, that means VA and VAR are being processed the same way (at least, the results show no difference).
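A minimal sketch of that experiment, assuming a Windows/wgl setup where the NV_vertex_array_range entry points have already been fetched with wglGetProcAddress; the geometry names are placeholders, and error handling plus the flush/fence synchronization a real VAR path needs are omitted. The same strip indices are drawn either from a plain client-side vertex array or from VAR memory, so any framerate difference isolates the VAR path.

```c
#include <windows.h>
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>    /* GL_VERTEX_ARRAY_RANGE_NV, PFNGLVERTEXARRAYRANGENVPROC */
#include <GL/wglext.h>   /* PFNWGLALLOCATEMEMORYNVPROC */

/* Entry points assumed to be fetched with wglGetProcAddress at startup. */
extern PFNGLVERTEXARRAYRANGENVPROC glVertexArrayRangeNV;
extern PFNWGLALLOCATEMEMORYNVPROC  wglAllocateMemoryNV;

/* Placeholder geometry; in the real app this is the stripper's output. */
extern const GLfloat  *srcVertices;   /* tightly packed XYZ */
extern const GLushort *stripIndices;
extern GLsizei         vertexCount, indexCount;

static GLfloat *varMemory = NULL;     /* allocated once, reused every frame */

void drawStrips(int useVAR)
{
    const GLfloat *base  = srcVertices;
    GLsizei        bytes = vertexCount * 3 * sizeof(GLfloat);

    if (useVAR) {
        if (!varMemory) {
            /* priority 1.0 asks the driver for video memory */
            varMemory = (GLfloat *)wglAllocateMemoryNV(bytes, 0.0f, 0.0f, 1.0f);
            memcpy(varMemory, srcVertices, bytes);
            glVertexArrayRangeNV(bytes, varMemory);
        }
        glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
        base = varMemory;
    } else {
        /* Standard vertex arrays: identical data, pulled from system memory. */
        glDisableClientState(GL_VERTEX_ARRAY_RANGE_NV);
    }

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, base);
    glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_SHORT, stripIndices);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```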