Display list performance

do what fresh suggests above + break the list up into smaller lots

I am feeding in a 361x361 array which is generating about 250,000 polys, but I’ve always been told about the massive power of the GeForce-based cards, so what gives? Any suggestions would be welcome. I’m not using GLUT either; I’m working through Windows.

Well, you’re using a GeForce 256. Even using NV_vertex_array_range (the fastest way to send triangles to the card, faster than display lists) and even if they’re untextured, you’re probably not going to break 8 million polygons per second.

And, of course, as the others said, smaller display lists are better.

Hey, this old thread just got revived from hell :slight_smile:

So, well, my problem got solved a long time ago in another thread. The problem wasn’t in the DL compilation, nor in the way I used it (of course, the DL was compiled only once during the whole app’s life, and then reused…)
The problem was in glPolygonMode.
I used to do:
glPolygonMode(GL_FRONT, GL_FILL);
glPolygonMode(GL_BACK , GL_LINE);
This turned out to be slow (yes, backface culling was enabled, so no ‘line’ was actually drawn).
As soon as I switched to glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); (still with culling), I got a performance boost.
This answer was given in another thread.
You may have the same problem as I had.

Thanks Amerio,
but that was not the problem; I did not use glPolygonMode. Even when I tried to use it as you suggested, together with these commands:
glEnable( GL_CULL_FACE );
glCullFace( GL_BACK );
, it did not seem to speed up my case.
Can you point out which thread you referred to?
Also, I wanted to post my comment on a newer thread (GeForce3 display lists compilation slower ?), but after registering I did not check carefully and ended up posting it here; I noticed it later, but did not want to double-post. No answer on that thread either.

For zed,
yes, I tried to break the model up manually and found that if I use only a quarter of that size, the list compilation time drops from 28 seconds to 1 second. So I know that if I chop it into several smaller lists I will get acceptable results on this computer. It will still take some experimenting to guarantee that it will be fast enough on other GeForce cards.

But I still believe this is a problem with NVIDIA, because, as was mentioned in another thread, it only affects NVIDIA cards.

Also, it is hard to explain why on a slower computer with the same card it takes less than 6 seconds (compared to 28 seconds), or why on some other cards (GeForce 2 GTS and the 256) it takes around 100 seconds.

Does anybody know of any discussion, article, or knowledge-base entry on what is causing this problem (which, as far as I know, affects NVIDIA-based cards only)?