I’m not aware of any issues with dl compilation. Are you comparing the same
driver version w/ different hardware or different drivers and different hardware? Matt may be able to offer some insight if you provide more specifics.
I’m in the same situation as Eric. Compiling DL in my prog takes seconds now.
Also with an ELSA Gladiac 920, driver 12.40.
I’m comparing it to:
Diamond FireGL1
3dfx Banshee
Both of them compiled display lists so fast that you don’t notice it.
Update: I made some new comparisons, and it now looks like this is typical behaviour of NVIDIA drivers rather than something specific to the GF3.
Same OS, same driver, testing with GF2 GTS and GF2 MX takes just about the same (long) time.
[This message has been edited by IsaackRasmussen (edited 06-12-2001).]
Cass, thanks for the answer: I thought I could e-mail Matt (or you) directly but I wanted to check if others had found the same behaviour…
I am actually comparing the ELSA Erazor X2 and the Gladiac 920 in the same environment (i.e. same MB, memory, …, and Detonator 12.60 under Win2K).
I don’t really have time to switch back to the X2 (which is already in another computer!) to measure the times, but I am pretty sure DL compilation is slower on the new card…
I will try to see if it depends on what I am compiling (the thing happened to me in one of my CFD post-processors and it does 10 million things with OpenGL: I had better try to narrow the problem down!).
The compilation runs OK, with the occasional freeze in the progression (I guess that happens when I have submitted too many vertices: the driver has to reallocate a larger chunk of memory and copy the batch it already had).
The thing that is taking time is glEndList, as I would expect: once it has received all the information, the driver can start optimizing my triangles…
The thing is, I do not know what the driver is trying to do but it takes forever to complete it…
I am going to try the program on the old card (which is on a newer machine…).
The thing I am trying to display is a structural model from STAAD (finite elements package).
These models basically give you a lot of lines that describe beams used in the structures.
What my program is doing is creating the actual beams: it takes the two points given in the model and creates a square-section beam using this axis.
So I am displaying N boxes where N is the number of elements.
Each box has eight vertices and six faces (twelve triangles). Each face has its own normal.
Now, as I described above, I was trying to display everything with a large single GL_TRIANGLES call.
Just out of curiosity, I tried to group my triangles into strips: each face of each box can actually be seen as a strip containing two triangles…
Well, when using GL_TRIANGLE_STRIP, the compilation stays at human scale (not turtle…).
So, is there something major that the driver is trying to do on GL_TRIANGLES that it does not try on GL_TRIANGLE_STRIP? (Is it trying to strip the mesh itself???)
I must say I got a big performance boost by using strips, which I did not expect: after all, I am just sending two fewer vertices per face this way (i.e. 4 instead of 6), and I call glBegin/glEnd for each face!