Strange Problem With Rendering Speed...

Hi all!

I have a strange problem…
I have 2 computers: the first one is an Athlon 700 MHz with a TNT2 card and the second one is a 1000 MHz with a GeForce2 GTS… I am working on a program with about 16,000 triangles to draw. There are about 2,000 triangles per display list…
The problem is that the frame rate is higher on the low-speed computer (700 + TNT2): it's near 76 FPS. The frame rate on the high-speed computer is about 25 FPS.
My two PCs have the latest drivers and the high-speed computer works very fast with other OpenGL programs/games.
I tried several things, such as removing the lighting, the texturing, the back-face culling and GL_COLOR_MATERIAL, but it changed nothing…
Just by removing the glNormal3f calls, the frame rate went up to about 40 FPS.
I am sure that the problem comes from something in the OpenGL part of the prog, because I made a test program without DirectInput, DirectSound and other things like that…

So do you have an idea?
Do some functions work faster on some graphics cards?

Does someone want to see my test program?

Hm… if you would paste some code, that might help. However, what I'm guessing is that things like anisotropic filtering or antialiasing are enabled by default in your drivers… you should check the config… that might make a big difference. It may also be that something is configured wrong with your AGP port or in your BIOS… very hard to say. Have you ever tried putting the GeForce into the PC with the TNT2 and checking the framerate then? As far as I know, display lists may be stored in AGP or video memory. On my old computer my AGP memory was far slower than system memory… some weird drivers and so on… so if the driver thinks it is doing something good by storing them in AGP memory, that might cause this awful FPS as well.


What is your GL initialization code? Do you enable and disable almost every state, like fog and lighting (if not used), or just the ones you use? For some reason OpenGL doesn't seem to have a standard for the way states are set when you create your context, or maybe card vendors just ignore it, I don't know. A good rule of thumb is: if you didn't set it, DO NOT assume it's off. Even if it works on every card you could test on, someone will be running some ancient card that will throw you a curve ball.
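For example, an init helper that pins everything down explicitly could look something like this. This is only a sketch: `InitGLState` and the exact list of states are made up by me, so adjust it to whatever your app really uses:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Hypothetical helper: force every state we care about to a known value
// at startup instead of trusting the context's defaults.
void InitGLState(void)
{
    // States this app actually uses -- enable them explicitly:
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    // States it does not use -- disable them explicitly:
    glDisable(GL_FOG);
    glDisable(GL_DITHER);
    glDisable(GL_POLYGON_SMOOTH);
    glDisable(GL_LINE_SMOOTH);
    glDisable(GL_BLEND);
    glDisable(GL_ALPHA_TEST);
    glDisable(GL_STENCIL_TEST);
}
```

Call it once right after `wglMakeCurrent`, and you know exactly what the driver is starting from.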

hope this helped…


Is FSAA turned on on the faster computer? That could explain a low framerate on there.


Yes, FSAA is off on the faster computer…
Perhaps if I use glDrawElements instead of display lists, it will solve the problem… What is your opinion?
I am going to try that…
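Roughly, what I have in mind for each mesh is something like this (the array names and sizes are just made up for illustration, not from my real prog):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Made-up arrays standing in for one mesh's data:
// 3 floats per vertex/normal, 3 indices per triangle.
static GLfloat  vertices[2000 * 3 * 3];
static GLfloat  normals [2000 * 3 * 3];
static GLushort indices [2000 * 3];

void DrawMesh(int triangleCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glNormalPointer(GL_FLOAT, 0, normals);

    // One call instead of thousands of glVertex3f/glNormal3f calls.
    glDrawElements(GL_TRIANGLES, triangleCount * 3, GL_UNSIGNED_SHORT, indices);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
}
```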

Here is my OpenGL Initialization code:
(Comments have been removed…)

GetClientRect(hWnd, &wndRect);   // size of the client area
ghDC = GetDC(hWnd);              // device context of the window

if (!SetupGLPixelFormat(ghDC))   // choose and set the pixel format
    PostQuitMessage(0);

ghRC = wglCreateContext(ghDC);   // create the rendering context
wglMakeCurrent(ghDC, ghRC);      // and make it current

GLfloat LightAmbient[] = { 0.5f, 0.5f, 0.5f, 1.0f };
GLfloat LightDiffuse[] = { 0.8f, 0.8f, 0.8f, 1.0f };
glLightfv(GL_LIGHT0, GL_AMBIENT, LightAmbient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, LightDiffuse);

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

Yeah, I noticed you don't have any glDisable() calls in that code. You really need to disable every OpenGL state that you are not using. Who knows what is lurking in the background of your app…


Originally posted by john_at_kbs_is:
Yeah, I noticed you don't have any glDisable() calls in that code. You really need to disable every OpenGL state that you are not using. Who knows what is lurking in the background of your app…

Sorry, but that’s bs. There’s a default state for everything in OpenGL. And NVIDIA’s drivers seem to obey the spec quite strictly.

I'm sorry, I didn't know NVIDIA's cards were the only cards on the market. Besides, you should never trust someone else to take care of your code for you; that's called bad programming…

Originally posted by john_at_kbs_is:
I'm sorry, I didn't know NVIDIA's cards were the only cards on the market. Besides, you should never trust someone else to take care of your code for you; that's called bad programming…
ndj55 said both machines have NVIDIA cards: a TNT2 and a GeForce2.

Of course they're not the only ones on the market. Actually, I have a Radeon 8500 in my primary box.

Take it easy, dudes
So nobody has any opinion about transforming display lists into arrays?
I don't have a lot of time at the moment, but perhaps I could give you a URL to get the prog so you can test it on your computer?

It looks like you are asking for the nicest polygon smoothing. I think the default state for polygon smoothing is OFF, but you may have enabled it somewhere else.

Now, cards up to the TNT2 used to ignore the GL_NICEST hint for polygon smoothing: they basically didn’t perform ANY polygon smoothing (Matt, correct me if I am wrong). But when the GeForce came out, a lot of people (including me) found that their app was slowing down dramatically. That was because the GeForce chips actually DO polygon smoothing and this is really slow.

Try disabling polygon smoothing and everything should be fine.
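Something like this at init time should do it (assuming nothing in your code re-enables it later):

```cpp
glDisable(GL_POLYGON_SMOOTH);               // no per-polygon antialiasing
glHint(GL_POLYGON_SMOOTH_HINT, GL_FASTEST); // and don't request the nicest mode
```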



It changed nothing…
The frame rate is still around 24 FPS…
It's really crazy… About 76 FPS on a TNT2 and 24 on a GeForce2 GTS…

Can you send a test app that I could run here (dual Athlon MP 2000+ with a GF4 Ti4200)?

It’s very difficult to help you without seeing the running app (i.e. all we can do is give you the “usual” advice…).



Get it all right here:

There are four CPP files:
main.cpp = main file
3DSLoader.cpp = My 3ds loader
GLFunctions.cpp = Texture Loader
GLWindow.cpp = The Class Of The Window

OpenGL init is in GLWindow; display list generation is in 3DSLoader…

Please, no bad jokes about my coding style…

OK, I have it here: I’ll have a look at the code to see if I can find anything suspicious.

Is it possible to get the “.3ds” files you use as well or are they confidential ?

I could use any 3ds file I have but then we won’t be able to compare the frame rates.



Actually, the 3ds files are in the zip…
There are 8 3ds files…
My results with this test program are:
On the high-speed computer, about 24 FPS.
On the low-speed computer, about 76 FPS.

I can give you other 3ds files, but the ones in the zip are perfect for the test…

I got from 200 to 225 fps on my sys.

P3 600MHz
GeForce 4 Ti 4400 w/ Det. 29.42
256MB PC133 RAM
Windows XP Pro


Also, I'm running with Quincunx AA turned on and with anisotropic filtering at 8x (if that matters in this demo at all).

BTW, I like how you're doing your main menu (even if it doesn't let me select anything); looks pretty cool.


thx SirKnight!
The menu works in the real program, but I removed the DirectInput layer for this test prog… There are some sounds too (ripped from Tony Hawk 3), but those were removed as well…

getting 192 FPS

P3 450
Gf2 GTS 32 MB latest drivers

I might add that your menus are spinning too fast, and your coding style is easy to follow.