Somewhat OT: strange frame times at high tri counts

I know this may not be strictly a “pure” OpenGL question, but…
I have an app for viewing 3D scenes. Everything is fine at a “normal” triangle count (100-200k). But when the scene gets larger (~400k tris), the time needed to render one frame becomes very strange: one frame is drawn in ~120 ms, the next in ~50 ms – some kind of toggling. Why are they so different?
I should mention that the DirectX (sorry) version behaves the same way.
If I remove the “optimization” step (sorting by state or by geometry – this part is only a qsort call), everything is fine – it seems that when you optimize too much, the card cannot keep up.
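For reference, the sorting pass is essentially just this kind of thing (an illustrative sketch, not my actual code – DrawRecord, state_key and the field names are made up for the example):

/* Hypothetical sketch of the per-frame "optimization" pass: sort draw
 * records by a render-state key with qsort before issuing draw calls. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned state_key;   /* texture/shader/material ids packed into a key */
    unsigned first_index; /* offset into the index buffer */
    unsigned index_count; /* number of indices in this batch */
} DrawRecord;

static int compare_by_state(const void *a, const void *b)
{
    const DrawRecord *ra = (const DrawRecord *)a;
    const DrawRecord *rb = (const DrawRecord *)b;
    if (ra->state_key < rb->state_key) return -1;
    if (ra->state_key > rb->state_key) return  1;
    return 0;
}

int main(void)
{
    /* Called once per frame on the full record list; with ~400k tris the
     * list is large enough for this call to show up in the frame time. */
    DrawRecord recs[3] = { {7, 0, 300}, {2, 300, 150}, {7, 450, 90} };
    qsort(recs, 3, sizeof(DrawRecord), compare_by_state);
    for (size_t i = 0; i < 3; ++i)
        printf("state %u at index %u\n", recs[i].state_key, recs[i].first_index);
    return 0;
}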

Originally posted by dawn:
I know this may not be strictly a “pure” OpenGL question, but…
I have an app for viewing 3D scenes. Everything is fine at a “normal” triangle count (100-200k). But when the scene gets larger (~400k tris), the time needed to render one frame becomes very strange: one frame is drawn in ~120 ms, the next in ~50 ms – some kind of toggling. Why are they so different?
I should mention that the DirectX (sorry) version behaves the same way.
If I remove the “optimization” step (sorting by state or by geometry – this part is only a qsort call), everything is fine – it seems that when you optimize too much, the card cannot keep up.

I think the strange timings come from the qsort call. Its execution speed depends on how the data is initially ordered in memory: if you have two datasets, one already sorted and one unsorted, the durations of the two qsort calls can differ widely.
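You can see the effect with something like this (a minimal standalone test, not tied to your viewer – the array size is just taken from your triangle count):

/* Time qsort on a shuffled array versus an already-sorted copy of the
 * same data.  The exact ratio depends on the C library's qsort. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 400000

static int compare_int(const void *a, const void *b)
{
    int ia = *(const int *)a, ib = *(const int *)b;
    return (ia > ib) - (ia < ib);
}

static double time_qsort_ms(int *data, size_t n)
{
    clock_t t0 = clock();
    qsort(data, n, sizeof(int), compare_int);
    return (double)(clock() - t0) * 1000.0 / CLOCKS_PER_SEC;
}

int main(void)
{
    int *shuffled = malloc(N * sizeof(int));
    int *sorted   = malloc(N * sizeof(int));
    if (!shuffled || !sorted) return 1;

    srand(1234);
    for (int i = 0; i < N; ++i)
        shuffled[i] = rand();

    /* Make a pre-sorted copy of the same values. */
    for (int i = 0; i < N; ++i)
        sorted[i] = shuffled[i];
    qsort(sorted, N, sizeof(int), compare_int);

    printf("qsort on shuffled data: %.2f ms\n", time_qsort_ms(shuffled, N));
    printf("qsort on sorted data:   %.2f ms\n", time_qsort_ms(sorted, N));

    free(shuffled);
    free(sorted);
    return 0;
}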

The times I posted were measured under the same conditions – the same view with the same objects, nothing different. I don’t think qsort behaves differently on the same array.
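A quick standalone check of that (illustrative C, nothing from the actual viewer): copy the same starting data each “frame”, sort it, and time every call. If the input really is identical each frame, the per-call times come out nearly constant.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N 400000
#define FRAMES 8

static int compare_int(const void *a, const void *b)
{
    int ia = *(const int *)a, ib = *(const int *)b;
    return (ia > ib) - (ia < ib);
}

int main(void)
{
    int *master = malloc(N * sizeof(int));
    int *work   = malloc(N * sizeof(int));
    if (!master || !work) return 1;

    srand(42);
    for (int i = 0; i < N; ++i)
        master[i] = rand();

    for (int frame = 0; frame < FRAMES; ++frame) {
        memcpy(work, master, N * sizeof(int));  /* same input every frame */
        clock_t t0 = clock();
        qsort(work, N, sizeof(int), compare_int);
        printf("frame %d: %.2f ms\n", frame,
               (double)(clock() - t0) * 1000.0 / CLOCKS_PER_SEC);
    }

    free(master);
    free(work);
    return 0;
}

One caveat: if an app sorts the array in place and doesn’t rebuild it each frame, then every frame after the first is sorting already-ordered data, so the input wouldn’t change from frame to frame in that case either.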