I went from not using display lists to using display lists, and my framerate dropped from 120 fps to 54. Talk about a performance hit. I was wondering if this is normal, or what? This is with a Rage 128 on win2k.
I only have 2 objects and 12 textures; one model is complex (about 3500 tris) and the other is a basic cube (12 tris). Texture sizes are 6 @ 64x64x32 and 6 @ 128x128x32, and the screen is only 800x600x32.
Does anyone know of a certain call to stay away from when using display lists, or what the deal is with display lists?
Never use GL_COMPILE_AND_EXECUTE; for some strange reason (probably lazy drivers) it is far slower than GL_COMPILE followed by a separate execute.
Display lists also take up loads of memory, so that could be slowing things down too.
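The advice above can be sketched like this. A minimal sketch, assuming an OpenGL 1.x context is already current; `drawModel()` is a hypothetical routine standing in for whatever immediate-mode geometry calls the app makes:

```c
#include <GL/gl.h>

/* Hypothetical routine that issues the geometry in immediate mode. */
extern void drawModel(void);

static GLuint modelList;

void buildList(void)
{
    modelList = glGenLists(1);

    /* Compile only -- avoid GL_COMPILE_AND_EXECUTE, which many
       drivers handle on a slow path. */
    glNewList(modelList, GL_COMPILE);
    drawModel();
    glEndList();
}

void drawFrame(void)
{
    /* Execute the pre-compiled list each frame. */
    glCallList(modelList);
}
```

Build the list once at load time, then call it every frame; that keeps the compile cost out of the render loop entirely.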
It turns out if I run the same demo on a win98 system, I get my fps back. So it looks like the win2k drivers need some major work.
Also, if you’re re-building your display list every frame, then there is probably some other mechanism which would be better (such as using vertex arrays).
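For per-frame data, a vertex array skips the list rebuild entirely. A minimal sketch, assuming a current OpenGL 1.1+ context; `verts` stands in for whatever vertex data the app regenerates each frame:

```c
#include <GL/gl.h>

#define NUM_VERTS (3500 * 3)   /* ~3500 tris, 3 verts each */

static GLfloat verts[NUM_VERTS * 3];   /* xyz, updated every frame */

void drawFrame(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);

    /* Draw straight from client memory -- no display list to rebuild
       when the data changes. */
    glDrawArrays(GL_TRIANGLES, 0, NUM_VERTS);

    glDisableClientState(GL_VERTEX_ARRAY);
}
```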
I used to use the display-lists-for-fonts technique. I dropped display lists and had NO change in FPS (with a full page of text). I also stopped using them to package state changes (about a dozen) and still saw no difference. I’m trying vertex arrays next…
Originally posted by Elixer:
It turns out if I run the same demo on a win98 system, I get my fps back. So it looks like the win2k drivers need some major work.
I suppose that by the time of Win2000, the boys in Redmond had decided to back DirectX and the Xbox by messing up the GL drivers. That is the power of having control over the OS.
Then again, maybe the low fps is due to the more secure NT architecture?
No, there is no reason whatsoever that OpenGL drivers for NT4/Win2K need to be slower than OpenGL drivers for Win9x.
I’m having a somewhat similar problem… I’ve written an app that runs at about 30-60 FPS depending on what’s in the frustum; if nothing is in the frustum, it runs at exactly 100 FPS max. This is the result I get on a 1GHz AMD Thunderbird using either a Matrox G450 AGP, a TNT2 M64 PCI, or a Voodoo3 3000 AGP. However, I have a friend who runs my app on his 600MHz Athlon (that’s right, Athlon, not T-Bird) with a Voodoo3 3000, and when nothing’s in the frustum he gets exactly 500 FPS. When actually rendering some models, he gets around 200 FPS compared to my 30-60. What’s up??? I’m using a MUCH faster CPU coupled with the same or even better video card but get much lower performance. Any ideas?
My friend bought several computers for his company, equipped with 800MHz AMD Thunderbirds, GeForce2 MXes, and win2k. Running 3D test programs from www.MadOnion.com revealed that 2 of the machines were really slow (something like 1600 points compared to the expected 3300), although the only difference between the machines was the motherboard. He took the machines back to the shop, which fixed the problem by installing the newest drivers. That doubled the speed. So check what scores you and your friend get; maybe it’s a driver problem. Check MadOnion.com, it’s the place for 3D benchmarking (at least for D3D-based desktops).
Don’t forget that with double-buffering, some OpenGL drivers will (sometimes optionally) wait for a vertical refresh before swapping, capping your FPS at your monitor’s refresh rate.
Today I read an interesting article on www.anandtech.com about the drivers for 98 and 2k. Check it: http://www.anandtech.com/showdoc.html?i=1422
I think it’ll give you exactly the answers you need. Maybe you’ll be really surprised how the Win 2k drivers (especially from ATI) hurt the performance of some games…
It’s not just the card drivers. There are also motherboard (chipset) drivers. Things like AGP support (4X?), fast writes or not, and even whether you set your IDE drives to use DMA may affect how efficiently the core logic can shuttle data to and from the graphics card.
I have to agree with mcraighead, there should be only a minimal hit, if any, in OpenGL going from win9x to win2k. (It seems some apps are even faster in win2k than win9x, for Nvidia stuff at least.)
I rewrote some of my drawing code, and this really blew me out of the water: in win9x, 210 fps; in win2k, 160 fps. Both had v-sync off, both at 800x600x32 @ 90Hz. I also noticed that ATI plays a cruel trick: they convert textures from 32 to 16 bit by default. I had to change it to the ‘quality’ setting, but instead of the 256 fps it was showing before, it dropped to 210. I wonder if Nvidia has this little trick too?
Oh, this was with a Rage 128. I think my next card will be something from Nvidia. Maybe if someone buys the GeForce 3 (NV20), they can give me their old GeForce 2 card?
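One way to at least state your intent against a driver quietly downgrading texture depth is to request a sized internal format. A sketch, assuming a current GL context and a hypothetical `pixels` buffer of 64x64 RGBA data; note that GL treats the internal format as a request, and a control-panel override like ATI’s can still force 16-bit regardless:

```c
#include <GL/gl.h>

void uploadTexture(const unsigned char *pixels /* 64x64 RGBA data */)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* GL_RGBA8 requests a full 8-bits-per-channel internal format;
       a bare "4" or GL_RGBA leaves the depth entirely to the driver. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```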
In general, I would expect that immediate mode apps will be a few percent faster under NT (lower overhead for us to retrieve the TLS pointer to the current GC). This effect should be under 5%.
Any other difference between Win2K and Win9x OpenGL performance, in my opinion, is a bug. Sometimes it may not be our fault (for example, buggy AGP drivers on one OS), but aside from that…
So are fast writes supposed to greatly increase the performance of a card? i.e. 500 percent? Also, is DMA or non-DMA the way to go?
Fast writes are usually irrelevant. They are broken on Intel chipsets, so it’s hard to use them.
I don’t understand the “DMA” vs. “non-DMA” question; specifically, what is meant by “non-DMA”. However, I can say with certainty that DMA is the only way to go for 3D graphics.
What about Athlon chipsets? Fast writes?
For a number of reasons that I can’t go into, no, they really don’t help all that much. The fact that they are broken on Intel chipsets, for a number of reasons, impedes our ability to use them on other chipsets.
(Yes, amazing, for once, Intel screwed it up and Via is the one without bugs; usually it’s the other way around.)
The AGP support in Win2K is really screwed up, at least for Intel chipsets.
You also cannot get more than 32 MB of AGP texture memory (with a 64 MB aperture).
It’s better than NT, which doesn’t support AGP memory at all, but… kinda bad for a “professional” OS!