GeForce2 wireframe performance in my engine...

Why should any HW handle lines as quads? That’d be three quads (six triangles) per wireframe triangle.

I mean, anyone can turn their GeForce into a Quadro (AFAIK, this was described in an article in a local computer magazine). You (I don’t mean you personally) flip some gates, and then the card can draw a line as just one line instead of rasterizing a complete quad out of it?

About a line being drawn as a quad: think of what happens when you draw a line with a width > 1.0 pixels. You get a rotated quad with the same width as the line width and the same height as the length of the line. Of course, linewidth = 1.0 could be a special case and be drawn with a standard line drawing algorithm (or whatever).
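
Something like this, as a rough sketch of that expansion (2D window space; the function and all the names are just made up to illustrate):

#include <math.h>

/* sketch: expand a 2D segment into the rotated quad described above;
   assumes a nonzero-length segment, corners come back in order
   around the quad */
void wide_line_quad(float x0, float y0, float x1, float y1,
                    float width, float quad[4][2])
{
    float dx = x1 - x0, dy = y1 - y0;
    float len = sqrtf(dx * dx + dy * dy);
    /* unit perpendicular, scaled to half the line width */
    float px = -dy / len * 0.5f * width;
    float py =  dx / len * 0.5f * width;
    quad[0][0] = x0 + px; quad[0][1] = y0 + py;
    quad[1][0] = x1 + px; quad[1][1] = y1 + py;
    quad[2][0] = x1 - px; quad[2][1] = y1 - py;
    quad[3][0] = x0 - px; quad[3][1] = y0 - py;
}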

Bob, I would think the same way: if the width is 1.0, use a good old Bresenham algorithm and it’s done… But then, Michael makes a point: it should not be slower than filling a polygon…

Actually, I don’t know anything about how current 3D cards do all this stuff… I know how I did it in the good old Atari/Amiga/386 days, but I suppose things have changed…

Any resource on the web about that?

Regards.

Eric

That’s the point. If linewidth is set greater than 1, then drawing a quad for each line could be the way to go, especially when the card is very fast.

About the GeForce2, it simply is that way (it’s a gaming card, and as such it makes a tradeoff on wireframe, which is little used in games). So I wrote my own wireframe routine with GL_LINES, and it works much faster (2x to 5x) than polygon mode GL_LINE with strips, triangles and even display lists. It works like this (with backface culling):
int i;
glBegin(GL_LINES);
for (i = 0; i < numFaces; i++) {
    const float *v0 = face[i].vertex[0];
    const float *v1 = face[i].vertex[1];
    const float *v2 = face[i].vertex[2];
    const float *n  = face[i].normal;
    /* camera direction vector: from the camera position to the
       first vertex of the face */
    float dx = v0[0] - camPos[0];
    float dy = v0[1] - camPos[1];
    float dz = v0[2] - camPos[2];
    /* the face is front-facing when its outward normal points back
       toward the camera, i.e. the dot product is negative */
    if (dx * n[0] + dy * n[1] + dz * n[2] < 0.0f) {
        glVertex3fv(v0); glVertex3fv(v1);   /* edge 1-2 */
        glVertex3fv(v1); glVertex3fv(v2);   /* edge 2-3 */
        glVertex3fv(v2); glVertex3fv(v0);   /* edge 3-1 */
    }
}
glEnd();

Of course, then comes optimization, strips, etc…
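
For instance, here’s one easy win, just as a sketch: with GL_LINES as above, an edge shared by two front-facing triangles gets sent twice, so you can mark vertex pairs and send each edge only once. frontFacing() and all the names here are made up, and the byte map is only reasonable for small meshes (a hash set would scale better):

#include <stdlib.h>

int i, e;
/* a numVerts x numVerts byte map remembers which edges have
   already been sent */
unsigned char *drawn =
    (unsigned char *)calloc((size_t)numVerts * numVerts, 1);

glBegin(GL_LINES);
for (i = 0; i < numFaces; i++) {
    if (!frontFacing(i))                  /* same culling test as above */
        continue;
    for (e = 0; e < 3; e++) {
        int a = faceIndex[i][e];
        int b = faceIndex[i][(e + 1) % 3];
        int lo = a < b ? a : b, hi = a < b ? b : a;
        if (drawn[lo * numVerts + hi])
            continue;                     /* edge was already drawn */
        drawn[lo * numVerts + hi] = 1;
        glVertex3fv(verts[a]);
        glVertex3fv(verts[b]);
    }
}
glEnd();
free(drawn);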

Thanks to all!

coco, as I think your last post somehow closes the thread, may I ask you: where were you all this time??? I remember seeing a lot of your posts in the “old forum”, but I hadn’t seen anything from you for a long, long time!

Regards.

Eric

P.S.: when I switch between wireframe, hidden lines and solid, I always use glPolygonMode… I never noticed such a drop in performance…

Hey Eric, you haven’t seen me for a long time either… Well, just kidding, gonna try out what coco proposed.
Thanks coco

Hey, someone remembers me? Well, yes, I’m into OpenGL again. I really enjoy trading info here and helping people when I can. And yes, this pretty much closes the thread, since I think the GeForce2 simply has this drawback (I guess the engineers at NVIDIA made some tradeoffs, yet I think they got the best product out).

Well, I’m off to work now, enough OpenGL forums for the moment.


Originally posted by Michael Steinberg:
Hey Eric, you haven’t seen me for a long time either…

Yep, but you started posting again before coco!

Eric

A little non-profit advertising: you GeForce1/2 owners may want to use a little utility I wrote. Check out http://www.guru3d.com/geforceaa/

bye.



First, no, a plain old Bresenham line algorithm is not sufficient for OpenGL. Check out the OpenGL spec – it actually requires a diamond-exit rule line rasterizer. Bresenham isn’t good enough because it doesn’t allow for subpixel accuracy.
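
If it helps, a toy version of the idea (just an illustration, not the spec’s wording or any actual implementation):

#include <math.h>

/* toy sketch of the diamond-exit rule: fragment center (cx, cy)
   owns the diamond |x - cx| + |y - cy| < 1/2 */
static int in_diamond(float x, float y, float cx, float cy)
{
    return fabsf(x - cx) + fabsf(y - cy) < 0.5f;
}

/* a compliant rasterizer walks the line at subpixel precision and
   produces the fragment for (cx, cy) only once the line has been
   inside that diamond and then leaves it, something Bresenham's
   integer stepping cannot decide correctly */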

Then there’s all the stuff about wide lines and the fact that lines require fragment operations, and it should be pretty clear that you cannot use the same hardware that GDI uses for lines.

Converting a line into a quad is indeed one way of drawing a line. The reason you’d do this is that most consumer 3D HW is designed to do ONE thing and ONE thing only: DRAW TRIANGLES FAST.

Now, that’s not the only way to draw lines, but I am merely illustrating the idea.

When you consider the amount of work required to convert a line into a quad, though, it’s substantial. After going to window space and clipping, you need to compute the major axis, offset the endpoints appropriately, and draw a quad with 4 vertices rather than a line with 2. Then you end up with 2 triangles per line, and so triangle setup work is increased.
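
Roughly, and only as an illustration (window space, clipping already done, made-up names; note this is the spec-style minor-axis offset for non-antialiased wide lines, rather than the rotated quad sketched earlier in the thread):

#include <math.h>

/* sketch: widen a window-space segment into a quad by offsetting
   the endpoints along the minor axis */
void line_to_quad(float x0, float y0, float x1, float y1,
                  float width, float quad[4][2])
{
    float h = 0.5f * width;
    if (fabsf(x1 - x0) >= fabsf(y1 - y0)) {   /* x-major: offset in y */
        quad[0][0] = x0; quad[0][1] = y0 - h;
        quad[1][0] = x1; quad[1][1] = y1 - h;
        quad[2][0] = x1; quad[2][1] = y1 + h;
        quad[3][0] = x0; quad[3][1] = y0 + h;
    } else {                                  /* y-major: offset in x */
        quad[0][0] = x0 - h; quad[0][1] = y0;
        quad[1][0] = x1 - h; quad[1][1] = y1;
        quad[2][0] = x1 + h; quad[2][1] = y1;
        quad[3][0] = x0 + h; quad[3][1] = y0;
    }
}
/* the quad is then split into two triangles for setup, which is
   exactly where the extra per-line cost shows up */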

You could do this either in SW or in HW, but again, either way, it’s a lot of computation.

Given this, it should be quite clear that unless the HW has a native OpenGL-compliant line rasterizer, which is actually a non-trivial feature, lines can be anywhere from a lot faster (fill rate) to a lot slower (line and triangle setup) than triangles.

Please don’t repeat the lines-are-simpler thing – as I said before, “you are assigning preconceived notions to the performance of a whole lot of things.”

To make things worse, there are a number of complicating factors in the situation described here; I can’t go into them because they involve sensitive information.

And lastly, in the case of a TNT, for something that old, I don’t think there should be any surprise that lines can be quite a bit slower than triangles.

I’m not going to post to this thread again. Suffice it to say: lines can be slower than triangles, and PolygonMode LINE is generally a bad way to draw things. This is not unique to NVIDIA hardware by a long shot. Deal with it.

- Matt

I must agree with Matt that lines are a non-trivial issue when you have to be OpenGL compliant. Still, polygon mode GL_LINE does work a lot faster in other implementations, at least with linewidth 1, and that is a fact I’m 100% sure about. Anyway, I can live with it, since #1 I have found other methods and #2 I understand the GeForce2 is a gaming card and as such it makes tradeoffs in order to satisfy the gaming industry. So I hope this guy Matt doesn’t take it too personally.

Originally posted by mcraighead:
Check out the OpenGL spec

I am going to… I suppose I need a lot of free time to do that!

Originally posted by mcraighead:
I’m not going to post to this thread again. Suffice it to say: lines can be slower than triangles, and PolygonMode LINE is generally a bad way to draw things. This is not unique to NVIDIA hardware by a long shot. Deal with it.

You sound quite upset… We were just looking for more information, you know… And nobody said that only nVidia’s hardware suffers from this problem…

Thanks for the explanation anyway…

Regards.

Eric

Eric: I think I did say it only happens on the GeForce2 among the cards I have tested (about 5 gaming and 4 pro cards), and Matt and I had a little email exchange that I think he may have taken personally.
I’m sorry if someone got upset, but I believe in what I have said. As always, I could be wrong.
Anyway, let’s just keep going… this is getting really boring.

I can say the same thing happens on the ATI Radeon.
Wireframe rendering is much slower than filled, textured polygon rendering.
But on the ATI, rendering doesn’t slow down if you turn line antialiasing on:
glEnable(GL_LINE_SMOOTH);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
That’s a good point…

Matt, if you ever happen to read this thread again, I want to say sorry. My postings sounded insulting, and I’m in no position to insult anyone here, since I’m just a hobby programmer still going to school. I hope you haven’t taken anything personally; after all, I’m VERY happy that we have a more than qualified person to talk to.

Oh, and a Bresenham is, first, not exact enough, and second, you don’t get any depth information for the fragments. You also can’t interpolate colors between the original vertices, and I don’t think you can texture anything. Doing a Bresenham would be inexact and very inflexible. Well, I might be wrong.
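
To make that concrete, here’s a toy sketch of what a GL-style line rasterizer has to interpolate besides x/y (linear only, ignoring perspective correction; all the names are made up):

/* toy sketch: attributes carried per fragment along a line of N
   steps; plain Bresenham steps integers and has none of this */
void shade_line(int N, float z0, float z1,
                const float c0[3], const float c1[3])
{
    int i, k;
    for (i = 0; i <= N; i++) {
        float t = (float)i / (float)N;  /* parameter along the line */
        float z = z0 + t * (z1 - z0);   /* depth, needed for the z-test */
        float rgb[3];
        for (k = 0; k < 3; k++)         /* smooth-shaded color */
            rgb[k] = c0[k] + t * (c1[k] - c0[k]);
        /* texture coordinates interpolate the same way (perspective-
           correct in reality); fragment ops would then consume z and
           rgb for each fragment */
        (void)z; (void)rgb;
    }
}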