Hey everyone, I am rendering 62500 triangles on screen using indexed triangle strips, and I get 9 fps. I have a P4 1.6 GHz with 256 MB SDRAM and a GeForce 3 with 64 MB. Out of the 256 MB of RAM, 167 were used by other tasks.
Is this good performance, or bad? I also transform it once, rotate it two times, and transform it again… if this matters.
Thanks.
Regards,
Vasko
EDIT: I don't use any LOD, quadtrees, octrees, or any other means of optimisation… besides the indexed triangle strips…
EDIT2: If I render in wireframe mode I get 9 fps; if I render in fill mode I get 21 fps.
Well, yes, it seems to be, but since I don't know what OpenGL does to render wireframe, I can't really tell. But I'm interested in this question as well. If I render using GL_POINT mode, I get 11 fps. This seems rather sad to me, but there are always methods for optimisation, no?
I mean, maybe wireframe being slow is fine, due to building full triangles and stuff, but POINTS?!
I am not familiar with ATI, but with the NVIDIA card you have, you generally only get hardware-accelerated lines on the Quadro line of boards, which are targeted at workstations (and cost $$).
There used to be hacks you could do to get HW lines on GeForce boards, don’t know if this is still possible.
Since you’re sending the same amount of geometry in both wireframe and polygon mode, the bottleneck is not at the vertex unit level.
So it should have something to do with fill rate. Is it possible that, because you can see through the holes in a wireframe, early Z-culling nearly always fails?
Well, I tried rendering without occlusion, including backfaces, and it gave the same frame rate in fill mode in both cases (with and without backfaces). I'm now trying to change the fill mode to wireframe to check whether backface culling makes any difference there, but I messed it up and have a bug, so I can't change/test it yet.
What I am still interested in is whether the result I got in the first post is good enough to continue.
Regards,
Vasko
EDIT: Fixed the bug (spelling mistake; the compiler doesn't notice the difference between GL_LINE and GL_LINES. The correct one here is the one without the 'S').
The backface culling makes absolutely NO difference.
Originally posted by gdewan: There used to be hacks you could do to get HW lines on GeForce boards, don’t know if this is still possible.
Hm, back on my GF3 I was using line strips for a rough wireframe because changing the polygon mode to lines was pretty slow. ATI handles that well, but on NVIDIA it seems to be a lot slower than using lines (which didn't seem to have that huge an impact).
By setting the line width to a value greater than 1, the lines are stretched along the Y-axis or X-axis, depending on the slope of the line (provided you didn't set the line smooth hint). The result is the same as if quads were used to draw the lines.
Thank you, this will help me a lot. But can someone still answer whether my rendering performance is efficient?
Thank you a lot, I learned so much from this post.
Regards,
Vasko
EDIT!!!: There was a mistake. I was not rendering 62500 triangles but basically twice that amount: 124002. I was dumb enough not to do some simple math right.
Sorry about that!