overlapping geometry causing severe slowdown?

Is this possible? I have loaded a junked model which has tons of overlapping geometry (faces), and the framerate was about 10fps or even less! Can overlapping geometry cause this? Maybe if they share lots of vertices or something? There should be an explanation!

How many polygons??

This suggests the issue is one of "depth complexity": how many times per frame the same pixel has to be filled. Higher depth complexity generally means lower performance and lower frame rates.

You need to look at methods such as occlusion culling to reduce high depth complexity.

A Google search for "depth complexity" should shed more light on the issue.
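If you want to see how bad the overdraw actually is, one quick trick (just a rough sketch; DrawScene() stands in for whatever draws your model) is to draw everything flat-shaded with additive blending, so every extra layer makes the pixel brighter:

    glDisable(GL_DEPTH_TEST);
    glDisable(GL_LIGHTING);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);        // additive blending
    glColor3f(0.1f, 0.1f, 0.1f);        // each covered layer adds ~10% brightness
    DrawScene();                        // hypothetical: whatever draws your model
    // bright areas = many overlapping faces = high depth complexity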

Oh, but even if I have only 5000 vertices and I render them using GL_POINTS it's very slow! What could be causing this?

Also, what's the difference between GL_POLYGON and GL_TRIANGLES? I've seen a C++ example rendering a mesh by calling glVertex3f only ONCE, not 3 times like I thought. How? I tried this and I got a weird-looking mesh, even when I used GL_POLYGON instead of GL_TRIANGLES! What's with C++!?

First you should verify that you are using an accelerated pixel format. If you are forced into software rendering you are extremely fill-rate bound.

Use glGetString(GL_VENDOR); if it says “Microsoft …” or “MESA …” you are in software rendering mode.
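For example (a minimal sketch; it assumes a GL context has already been made current on the calling thread):

    #include <cstdio>
    #include <GL/gl.h>

    // call this once, right after the GL context has been made current
    void PrintGLInfo() {
        const char* vendor   = (const char*) glGetString(GL_VENDOR);
        const char* renderer = (const char*) glGetString(GL_RENDERER);
        printf("GL_VENDOR:   %s\n", vendor   ? vendor   : "(null - no current context?)");
        printf("GL_RENDERER: %s\n", renderer ? renderer : "(null)");
        // "Microsoft GDI Generic" or a Mesa string usually means software rendering
    }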

Other than that, there are tons of things one can do wrong; reloading the textures/meshes from the hard disk on a per-frame basis, for instance.

Without you saying exactly what you are doing, we can only guess what is going wrong.

GL_TRIANGLES draws independent triangles, each made from 3 vertices.

GL_POLYGON draws a single polygon with an arbitrary number of vertices. The polygon is internally divided into triangles by the driver. The polygon has to be convex, otherwise it may look strange.

If you use GL_TRIANGLE_STRIP, you have to define 3 vertices for the first triangle and only one for each following triangle, which is drawn using the last three vertices. If you define vertices v1, v2, v3, v4, v5, three triangles are drawn: t1=(v1,v2,v3), t2=(v2,v3,v4), t3=(v3,v4,v5).
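For example, the same two-triangle quad drawn both ways (a small immediate-mode sketch; the positions are made up):

    glBegin(GL_TRIANGLES);              // 6 vertices -> 2 independent triangles
        glVertex3f(0.f, 0.f, 0.f);
        glVertex3f(1.f, 0.f, 0.f);
        glVertex3f(0.f, 1.f, 0.f);

        glVertex3f(1.f, 0.f, 0.f);
        glVertex3f(1.f, 1.f, 0.f);
        glVertex3f(0.f, 1.f, 0.f);
    glEnd();

    glBegin(GL_TRIANGLE_STRIP);         // 4 vertices -> the same 2 triangles
        glVertex3f(0.f, 0.f, 0.f);
        glVertex3f(1.f, 0.f, 0.f);
        glVertex3f(0.f, 1.f, 0.f);
        glVertex3f(1.f, 1.f, 0.f);
    glEnd();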

Concerning the performance problem: maybe you should not draw so many points. I heard that most graphics cards do not support hardware-accelerated drawing of points and lines.

Hi, thanks to both of you!

I'll check the glGetString call and tell you what it's returning soon.

No, I'm not loading anything from the hard drive in the loop, just when it needs to get loaded (only once). I load the mesh, I parse its data into structures, and later in the loop I retrieve that data and draw the mesh.

I was using GL_POINTS just to make sure the vertices were loaded correctly (and they were). But rendering with triangles was as slow as with points. That's what I don't understand: why?!

I have an FX5500; I guess it should support enough polygons and vertices, no? It's not top-notch but it can handle quite a lot more data than what I'm currently rendering!

:confused: What could be wrong…
I can play any OpenGL-based game without problems, and the drivers are okay, the latest ones from NVIDIA.

It's undoubtedly a problem in my engine, I guess, but I just call glVertex3f… why is it slowing down so much? (And yes, that's the part where it slows down.) Maybe, as you said, it's rendering in software mode for some reason? I'll check soon and let you know.

Thanks again, but still no luck fixing this problem :frowning:

Do you use display lists?

I'm sorry, I know a display list lets you store some commands in the graphics card's memory for rapid access (it's like 3 times faster in some cases and even more, from what I've read), but I don't know how to use DLs! And I'm not in C++, if that helps you.

Thanks again!

Forgot to ask… but aren't display lists for repetitive tasks? Will they help me in this case? (Well, it IS repetitive since I call the same thing all the time… so I guess it should help.)

Display lists are very easy to use. Assume you have a display function in which you draw some quads like this:

 
void display() {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-10., 10., -10., 10., 1., 1000.);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0., 0., 100., 0., 0., 0., 0., 1., 0.);

    // draw every quad in immediate mode, every frame
    for (int i = 0; i < num_quads; i++)
        Quad[i].Draw();

    SwapBuffers(...);
}

If you want to do this with a display list, you need to initialize the list ONCE, right after you've made your GL context current:

void InitList() {
    // record the drawing commands into display list 1 (compile only, do not execute)
    glNewList(1, GL_COMPILE);

    for (int i = 0; i < num_quads; i++)
        Quad[i].Draw();

    glEndList();
}

And you have to change the display func like this:

 
void display() {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-10., 10., -10., 10., 1., 1000.);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0., 0., 100., 0., 0., 0., 0., 1., 0.);

    // replay the recorded quads with a single call
    glCallList(1);

    SwapBuffers(...);
}
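One small detail: instead of hard-coding list name 1, you can ask the driver for a free list name with glGenLists, which avoids collisions if other code also creates display lists. A minimal sketch (listId is just my name for the variable):

    GLuint listId = glGenLists(1);      // reserve one unused display list name

    glNewList(listId, GL_COMPILE);
    for (int i = 0; i < num_quads; i++)
        Quad[i].Draw();
    glEndList();

    // per frame:
    glCallList(listId);

    // when the geometry changes or at shutdown:
    glDeleteLists(listId, 1);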

Yes but… will this help me?
My current problem is…

I read all the data from an .obj file and store it in structures.

Later, in the loop, I call glVertex3f 3 times per triangle; that's a lot of calls! So maybe what you suggested might help me, but I don't see it as very practical. :frowning:

I thought I could get more than 2fps with just about 4000 polygons on screen…

I'm calling glGetString with the name value GL_VENDOR but I get a null! Yes, I'm calling it via the OpenGL DLL and it's giving me 0.

Yup, it will help you… try it :slight_smile:

It doesn't matter how you store your data: a display list speeds up your app (not if you only draw a few triangles, but for a thousand triangles you should see the effect).

Of course it only makes sense if you do not change your model data during program execution (or at least not in every frame you draw), because in that case you have to create the display list again.
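For example (just a sketch; meshChanged and DrawMeshImmediate are placeholder names, not anything from your code):

    if (meshChanged) {                  // recompile only when the model really changed
        glNewList(listId, GL_COMPILE);
        DrawMeshImmediate();            // your existing glBegin/glVertex3f/glEnd code
        glEndList();
        meshChanged = false;
    }
    glCallList(listId);                 // cheap to call every frame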

By the way, what hardware are you using?

Athlon XP 2600+, 1 GB of DDR RAM (400 MHz),
and the FX 5500 with 256 MB.

It's not my hardware's fault… I can render way more than 4000 triangles! It's my program's fault, I'm sure, but I'm not loading anything in the loop as someone suggested, and there's nothing else I can think of… Maybe I'm missing something important for speed?

Shouldn't I still get 100fps or more with just 4000 triangles?!

maybe it’s time to post some code…

But why do you guys think everyone works with C++? I certainly don't; I'm using a Basic language. And why do you need code if I already told you exactly what I'm doing? :S

I already told you exactly what I'm doing
Maybe you're not doing it exactly…

I will enumerate everything in my test source, let's see:

I init the WinAPI window, with context, etc.
That works OK, as I already have cubes and simple stuff working.

I set the clear color
I enable GL_DEPTH_TEST

I setup a light
I enable the light (gl_light0)
I enable GL_CULL_FACE
I enable GL_NORMALIZE

I set matrix mode to GL_PROJECTION
I call glLoadIdentity
I set the perspective with gluPerspective
I set matrix mode to GL_MODELVIEW
I call glLoadIdentity

Then I have my code for loading the .obj

and I start the loop:

I clear color and depth buffer
I call glLoadIdentity
I call gluLookAt

Then I begin with GL_TRIANGLES
I reset my list and do a for loop over the elements

In the for loop I have 3 calls to glVertex3f
next (end of the loop)

Then glEnd

I do some keyboard checking…

Then I SwapBuffers
and close the loop

Then my END function, which just closes the libraries and frees some stuff; after that it closes the program.

That's all I'm doing; find anything wrong?
Thanks in advance
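In rough C/OpenGL terms (not my actual Basic code, the names are made up), the loop part is equivalent to something like this:

    while (running) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        gluLookAt(0., 0., 10., 0., 0., 0., 0., 1., 0.);

        glBegin(GL_TRIANGLES);
        for (int i = 0; i < num_triangles; i++) {
            glVertex3f(tri[i].v1.x, tri[i].v1.y, tri[i].v1.z);
            glVertex3f(tri[i].v2.x, tri[i].v2.y, tri[i].v2.z);
            glVertex3f(tri[i].v3.x, tri[i].v3.y, tri[i].v3.z);
        }
        glEnd();

        // keyboard checking ...

        SwapBuffers(hdc);
    }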

Which Basic dialect are you using? Because I have the bad feeling that it is just interpreted Basic and hence really slow.

I just slapped together a really simple test program (in C++) that creates >50,000 vertices on the fly and renders them as GL_POINTS. With no special optimizations it already runs at >100 FPS on my GeForce FX 5900.
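Something along these lines (a rough GLUT-based sketch, not the exact test program; the point count and camera values are arbitrary):

    #include <GL/glut.h>
    #include <cstdlib>

    const int NUM_POINTS = 50000;
    static float pts[NUM_POINTS][3];

    void display() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0., 0., 3., 0., 0., 0., 0., 1., 0.);

        glBegin(GL_POINTS);
        for (int i = 0; i < NUM_POINTS; i++)
            glVertex3fv(pts[i]);
        glEnd();

        glutSwapBuffers();
        glutPostRedisplay();            // keep redrawing so the frame rate can be measured
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(640, 480);
        glutCreateWindow("points test");

        for (int i = 0; i < NUM_POINTS; i++)        // random point cloud in [-1, 1]^3
            for (int j = 0; j < 3; j++)
                pts[i][j] = 2.0f * rand() / RAND_MAX - 1.0f;

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, 640.0 / 480.0, 0.1, 100.0);

        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }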