I tried to modify my original post but something went wrong… never mind, here is the original post.
I’m getting strange results when using NvTriStrip and having it generate an optimized triangle list: I seem to get the best result with an effective cache size of 8.
Here are the results on a GeForce FX 5200. The model is just a .3ds model I got from 3dcafe, displayed 5 times. Total triangle count per frame is about 190,000, and I’m rendering with 8 point lights. I’m using a VBO for the model data; the indices are stored in system memory.
Original model  15.0 fps  (100%)
VCS =  4        31.0 fps  (207%)
VCS =  6        33.0 fps  (220%)
VCS =  8        35.0 fps  (233%)
VCS = 10        33.0 fps  (220%)
VCS = 12        31.0 fps  (207%)
VCS = 14        31.0 fps  (207%)
VCS = 16        29.0 fps  (193%)
VCS = 18        29.0 fps  (193%)
VCS = 20        29.0 fps  (193%)
VCS = 22        27.0 fps  (180%)
VCS = 24        27.0 fps  (180%)
When tested on different cards I get the same results (a Radeon 9700 or GeForce FX 5900 also peaks at a cache size of 8). The framerates are way higher, of course.
I’m using NvTriStrip like this:
SetCacheSize( Size + 6 );   // Size is the effective cache size from the table above
SetStitchStrips( false );   // don't stitch strips together with degenerate triangles
SetMinStripSize( 0 );       // no minimum strip length
SetListsOnly( true );       // output an optimized triangle list, not strips
The bottleneck is really in the T&L/vertex shader part: changing the resolution doesn’t change the fps at all, and disabling lighting makes the fps shoot through the roof, so I’m not AGP limited. I’m not CPU limited either (I can run several different apps simultaneously and the framerate stays the same).
I’m really at a loss now… if anyone can give me some pointers, please do. Is it NvTriStrip? Is it because I’m using triangle lists? I’ve checked everything else and I really don’t know where to go from here…