I have been working with OpenGL off and on for a couple of months now. I wrote a small and incomplete engine in September and early October. I then started a new job, so that got put on hold. Now I have started on a revised (read: rewritten) engine, and I have come across a very odd problem that has me tearing my hair out by the roots.
My first engine can render some 5000-6000 smooth-shaded, textured tris, some with alpha blending and whatnot (OK, I know, that's not all that great!) at 50-60fps.
My second… umm, well, it's not even an engine yet… It's actually just a test program that I was gonna use to see how many polys I could shove through my card without any other hindrances. It binds a texture once per frame, then loops 100 times, drawing a triangle over itself in the center of the screen. With this I get 25-27fps. If I add a second, outer loop that runs 25 times (so 25 × 100 = 2500 triangles per frame), the frame rate drops to less than 2fps.
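For reference, the draw path described above looks roughly like this (the texture handle, loop structure, and coordinates are illustrative placeholders, not the exact test code):

```c
/* Sketch of the per-frame draw path: one texture bind, then
 * 25 x 100 = 2500 overlapping triangles via immediate mode.
 * 'texID' and the vertex coordinates are placeholders.       */
#include <GL/gl.h>

void DrawFrame(GLuint texID)
{
    int outer, inner;

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, texID);   /* bound once per frame */

    for (outer = 0; outer < 25; outer++)       /* the added outer loop  */
    {
        for (inner = 0; inner < 100; inner++)  /* original inner loop   */
        {
            glBegin(GL_TRIANGLES);
                glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
                glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
                glTexCoord2f(0.5f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
            glEnd();
        }
    }
}
```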
I am using similar methods of drawing polys in both cases, and I am using the same initialization code in both (an only slightly modified version of NeHe's). The first case is as complex as the second is simple. The only thing I can think of is that in the second case I'm drawing the test polys over each other every time. Is this bad?
I greatly apologize for the length of this message, but I'm at my wits' end here.
Oh, and I have checked the renderer as reported by glGetString( GL_RENDERER ), and in each case it's "SAVAGE 2000" (IIRC); I have a Diamond Viper II.
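For completeness, the renderer check is just the standard glGetString query, run after the rendering context is current (this is a generic sketch, not the exact code from the test program):

```c
/* Query the GL driver strings. Only valid once a rendering
 * context has been created and made current.                */
#include <stdio.h>
#include <GL/gl.h>

void PrintGLInfo(void)
{
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}
```

If both programs report the same renderer string, both are hitting the same driver path, which rules out one falling back to a software renderer by that measure alone.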