Minification? Tessellation?

I’m using the depth buffer and I’m culling according to the view. An average view from my camera in a simple world doesn’t exceed 1000 triangles. When I move the viewpoint very close to an object, in this case a simple airplane model I made, the frame rate starts to drag. Backing away from the object (and actually making more tris visible) fixes the frame rate. The Big Red Book didn’t really give me the answer I expected. Can anyone make any suggestions? Thanks!

You seem to be fill-rate limited.
Try keeping the camera in the same place and changing the size of the window (e.g. 10x10 <-> 1024x768) and see what happens.
If that is the case, try simplifying shading, texturing (bilinear instead of trilinear or anisotropic), etc.
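For example, dropping from trilinear to bilinear minification is just a texture parameter change (a sketch, assuming a mipmapped 2D texture is already created and bound; pick one of the two):

```c
/* Trilinear: interpolates within AND between mipmap levels (more expensive per pixel). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

/* Bilinear: interpolates within the nearest mipmap level only (cheaper fill). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
```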

Any details about your hardware config? Are you using software OpenGL?

Current GPUs can process over 350 million vertices per second, so the bottleneck is not at the vertex level but at the pixel level.

The bottleneck is situated at the pixel level. As the polygons take up more screen space, more pixels need to be drawn, putting more strain on the rasterizer and fragment processor.

Hmmmm… I have a Pentium 4 1.8 GHz CPU and a Radeon 9600 Pro. It seems like this combination should be able to handle the seemingly meager drawing requests I’m currently making, so I’m sure I’m doing something else incorrectly. I’m using display lists and texture objects. I’m currently NOT “stripifying” my polygons but am using GL_TRIANGLES in my lists every time a texture changes. Are there any other obvious speedups that I can implement to get better performance?
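For reference, by “display list per texture change” I mean this kind of pattern (a rough sketch; the names `texture_id`, `num_verts`, and the vertex arrays are placeholders for my own data):

```c
/* Build once, at load time: one list per texture batch. */
GLuint list = glGenLists(1);
glNewList(list, GL_COMPILE);
glBindTexture(GL_TEXTURE_2D, texture_id);  /* texture for this batch */
glBegin(GL_TRIANGLES);
for (int i = 0; i < num_verts; ++i) {
    glTexCoord2fv(texcoords[i]);
    glNormal3fv(normals[i]);
    glVertex3fv(vertices[i]);
}
glEnd();
glEndList();

/* Then each frame, just: */
glCallList(list);
```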

>>Are there any other obvious speedups that I can implement to get better performance?

Maybe use recent drivers?
Run GLInfo and see if you have “Vendor: Microsoft Corporation”; in that case you are only getting software rendering, with no hardware acceleration.
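You can also query the strings from your own program instead of using GLInfo (must be called with a current OpenGL context, so not standalone):

```c
#include <stdio.h>
#include <GL/gl.h>

/* Call after the rendering context is created and made current. */
void print_gl_info(void)
{
    printf("Vendor:   %s\n", glGetString(GL_VENDOR));
    printf("Renderer: %s\n", glGetString(GL_RENDERER));
    printf("Version:  %s\n", glGetString(GL_VERSION));
}
```

If the vendor reads “Microsoft Corporation” (renderer “GDI Generic”), you are on the unaccelerated software path.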

And as we said (perfect double post!), your problem is NOT the geometry. Did you try to resize the window?

I’m curious about your “fill rate limited” comment. Can you elaborate?

One limitation on modern cards is the fill rate, that is, the maximum number of pixels the card can rasterize per second. The simple way to check is to resize the window: if a smaller window gives better fps, you are fill rate limited.

The leading cause of fill rate problems in simple programs is overdraw, that is, rendering the same pixels unnecessarily. There are several things you can do to improve this. The simplest, if your data is consistently wound, is to enable culling with glEnable(GL_CULL_FACE); assuming your vertices are specified in counterclockwise order, this will discard backfacing polygons and can yield a good speedup. More complex methods are also possible, but they involve organizing your data in some sort of tree and either doing occlusion culling or rendering from front to back to take advantage of the depth buffer.

‘Fill rate limited’ means that what limits your performance is the filling of the triangles, that is, drawing pixels.

As opposed to ‘transform limited’ (when the bottleneck is transforming and lighting the vertices stored on the card) or ‘geometry transfer limited’ (when you send a lot of vertices each frame).

Please do benchmark different window sizes and report your GL vendor string.