I’ve got a strange problem I can’t find the cause of. I’m rendering static geometry using only display lists, which I compile only once, of course. Yet I get around 20 fps at most on a Mac, while I get 60+ fps on a PC with nearly the same configuration.
I’ve got a G5 with a Radeon 9800 Pro, so this puzzles me a lot. Here’s what I turned off:
- State changes
- Depth test
- Depth write
- Alpha test
Basically… there is nothing left, and I still get very bad performance. The CPU doesn’t seem to be the bottleneck either: if I don’t make any glCallList calls, the fps jumps up a big step, to 80 fps or more.
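For reference, the compile-once/call-many pattern described above looks roughly like this (a minimal sketch; `drawStaticGeometry` is a placeholder for whatever immediate-mode calls build the scene, and a valid GL context is assumed):

```
/* At load time: compile the geometry into a display list once */
GLuint sceneList = glGenLists(1);
glNewList(sceneList, GL_COMPILE);
drawStaticGeometry();   /* placeholder: glBegin/glVertex/glEnd calls */
glEndList();

/* Every frame: a single call replays the compiled commands */
glCallList(sceneList);
```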
I even made sure the render surface was hardware accelerated by querying it with aglDescribePixelFormat, and I even set the renderer ID manually to the one shown in OpenGL Info for the hardware-accelerated ATI driver.
By the way, I’m running OS X Panther 10.3.2.
Does anyone have a clue what could be the problem?
I forgot to mention:
The display lists contain only float elements (no unsigned int colors or anything like that). I read that this used to be a problem (or was it?).
Anyway, it doesn’t seem to change anything whether I use only floats or not.
What’s in your display list?
If you have more than 65536 vertices in a single triangle strip, it’ll be slow…
Other than that, the overhead of a glCallList call is quite high, so it’s not something you want to do frequently.
For the record, I finally found the problem.
When creating my display list, I did not declare my vertices properly between glBegin and glEnd.
For example, I was doing:
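(The original snippet was lost from the post; based on the fix described next, the mis-ordered code presumably looked something like this reconstruction, with placeholder values:)

```
glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);  /* wrong: vertex emitted first...          */
    glColor3f(1.0f, 0.0f, 0.0f);     /* ...so these attributes only take effect */
    glTexCoord2f(0.0f, 0.0f);        /*    for the NEXT vertex                  */
    /* ... */
glEnd();

/* correct ordering: set attributes, then emit the vertex */
glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f);
    glTexCoord2f(0.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    /* ... */
glEnd();
```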
but glVertex3f needs to come after glColor and glTexCoord, or the driver won’t be able to “optimize” it, because the vertex will be ill-formed: in immediate mode, glColor and glTexCoord set the current attribute state, and glVertex emits the vertex using whatever state was set at that moment.
This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.