GeForce2 wireframe performance in my engine...

My only complaint with the GeForce2 is wireframe performance. My engine is irrationally slow in wireframe mode compared to point and solid modes. I have tried turning different states on and off, and used vertex arrays, triangles, strips, and display lists, and all of them are unusably slow in wireframe mode. I think my app makes the drivers fall back to software rendering when in wireframe mode (the only explanation I can find). My engine is now 2 years old, has been optimized and debugged a lot, and works great on other cards (Intergraph, Mitsubishi, TNT, SGI Visual Workstations, etc.), so I think it could be a particularity of the GeForce implementation of OpenGL, but I won't assume a driver problem, because 99.9% of the time our own bugs are the cause of this type of weird thing.

Does anyone know the reason? Does anyone have the same problem?

Aside from this weird thing, my GeForce2 simply rules, beating all the other cards at work (including $2000+ cards like the Intergraph Intense 3D 3410 AGP with geometry acceleration).

NVidia could intentionally slow down rendering in wireframe mode to prevent people from using GeForce for AutoCad rendering and force them into buying more expensive Quadro Pro cards. Quadro Pro reportedly is much faster than GeForce in drawing lines. It’s just a guess, but I don’t see any other reason for it to be slower than “normal” mode.

Almost forgot: GL_LINES is really fast for drawing objects in wireframe mode, while GL_TRIANGLES, GL_TRIANGLE_STRIP, vertex arrays, etc., are not. I draw wireframe objects without lighting, texturing or normals, coloring each object with glColor (pretty simple, maybe TOO simple for the GeForce?).

Well, I thought of that one (slowing wireframe down to favor the Quadro1/2). It would be a very MS-style strategy. Let's hope things don't start getting that hairy now that 3dfx is gone.

Anyway, since I'm NOT buying a Quadro1/2, I would like to find a workaround to the problem. Right now, using GL_LINES to draw the three edges of each triangle (pretty inefficient) is the workaround I have found, and it is a whole lot faster than the logical, efficient way (wireframe GL_TRIANGLES).
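For reference, the workaround looks roughly like this (a simplified sketch, not my actual engine code; the vertex and triangle index arrays are placeholders):

```c
#include <GL/gl.h>

/* Draw the wireframe of an indexed triangle list by emitting each
 * triangle's three edges explicitly as GL_LINES.  Shared edges still
 * get drawn twice, but glPolygonMode(GL_LINE) is avoided entirely. */
void draw_wireframe_gl_lines(const float *verts,   /* xyz per vertex    */
                             const unsigned *tris, /* 3 indices per tri */
                             unsigned tri_count)
{
    glBegin(GL_LINES);
    for (unsigned t = 0; t < tri_count; ++t) {
        const unsigned i0 = tris[3 * t + 0];
        const unsigned i1 = tris[3 * t + 1];
        const unsigned i2 = tris[3 * t + 2];
        glVertex3fv(&verts[3 * i0]); glVertex3fv(&verts[3 * i1]); /* edge 0-1 */
        glVertex3fv(&verts[3 * i1]); glVertex3fv(&verts[3 * i2]); /* edge 1-2 */
        glVertex3fv(&verts[3 * i2]); glVertex3fv(&verts[3 * i0]); /* edge 2-0 */
    }
    glEnd();
}
```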

Hope to hear from somebody… (nvidia?)

coco, instead of GL_LINES (passing 6 vertices to draw a triangle, 2 for each line segment), why not use GL_LINE_LOOP (passing 3 vertices to draw a triangle; OpenGL will connect vertex 3 back to vertex 1)? It will minimize your API call overhead and should boost performance further.

Thanks, that should be even faster. I was thinking of a preprocessing pass to find the edges and draw each of them only once (with GL_LINE_LOOP I'm still drawing every shared edge twice).
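In case it is useful to anyone, the preprocessing pass I have in mind looks roughly like this (a sketch with made-up names, not my engine code): store each edge with the smaller index first, sort, drop the duplicates, and then the whole wireframe can be drawn with a single GL_LINES call per mesh.

```c
#include <stdlib.h>

/* Compare two edges, each stored as a pair of vertex indices. */
static int cmp_edge(const void *a, const void *b)
{
    const unsigned *ea = (const unsigned *)a, *eb = (const unsigned *)b;
    if (ea[0] != eb[0]) return (ea[0] < eb[0]) ? -1 : 1;
    if (ea[1] != eb[1]) return (ea[1] < eb[1]) ? -1 : 1;
    return 0;
}

/* Fill 'edges' with the unique edges of an indexed triangle list and
 * return how many there are.  'edges' needs room for tri_count * 3
 * index pairs (the worst case, when no edge is shared). */
unsigned build_unique_edges(const unsigned *tris, unsigned tri_count,
                            unsigned *edges)
{
    unsigned n = 0, unique = 0;
    for (unsigned t = 0; t < tri_count; ++t) {
        for (unsigned e = 0; e < 3; ++e, ++n) {
            unsigned a = tris[3 * t + e];
            unsigned b = tris[3 * t + (e + 1) % 3];
            edges[2 * n + 0] = (a < b) ? a : b;   /* smaller index first */
            edges[2 * n + 1] = (a < b) ? b : a;
        }
    }
    qsort(edges, n, 2 * sizeof(unsigned), cmp_edge);
    for (unsigned i = 0; i < n; ++i) {            /* compact out duplicates */
        if (i == 0 || cmp_edge(&edges[2 * i], &edges[2 * (unique - 1)]) != 0) {
            edges[2 * unique + 0] = edges[2 * i + 0];
            edges[2 * unique + 1] = edges[2 * i + 1];
            ++unique;
        }
    }
    return unique;
}

/* At draw time, once per mesh:
 *   glDrawElements(GL_LINES, unique_count * 2, GL_UNSIGNED_INT, edges);  */
```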

Anyway, I don't see making a special case for the GeForce as a solution in such a simple matter (using GL_LINES or GL_LINE_LOOP instead of GL_TRIANGLES or GL_TRIANGLE_STRIP). I hope to find out the reason for such weird behavior, so we can work out a solution.

It makes a lot of sense that GL_LINES beats PolygonMode GL_LINE. After all, PolygonMode GL_LINE will draw each line twice in a mesh.

PolygonMode is, in general, one of the not-so-good features in OpenGL.

  • Matt

Well, I understand, yet I would expect GL_LINE_LOOP to have at most the same performance as GL_TRIANGLES with polygon mode GL_LINE (both send 3 vertices per triangle, and both draw shared edges twice when GL_LINE_LOOP is used the dumb way). The GL_LINES primitive should in fact be slower (again, in dumb mode...), since it involves sending 6 vertices per triangle. Worse, GL_TRIANGLE_STRIP should be faster than both GL_LINES and GL_LINE_LOOP when drawing edges twice, since it requires far fewer vertices.
What happens on my GeForce2 is:
-GL_LINES drawing edges twice is really fast!
-GL_TRIANGLES and GL_TRIANGLE_STRIP in GL_LINE polygon mode are so slow that SGI's software OpenGL is as fast as the GeForce in small windows (small to reduce the fill-rate factor that hogs software OpenGL).
-I have tried display lists too, and they don't work either.
-Solid and point triangles work as they should on the GeForce2 (damn fast).
-I believe this same behavior happens with the NVIDIA Groove demo and the Lighting demo when rendering in wireframe mode (so my app may not be the problem). I feel 3D Studio MAX does the same too.

Again, this behavior is unprecedented for my engine: I have run it successfully on the following cards without this wireframe issue: NVIDIA TNT, Riva 128, FireGL 4000, Intergraph Intense3D 3410TG AGP, ATI Rage Pro, SGI Visual Workstation, FireGL 1000 Pro, MS software OpenGL, SGI software OpenGL, Mesa3D software... I'm going to try it on Linux to see if it does the same.

As you may have guessed from the cards, it's not a game engine; it's a more generic engine, where wireframe is really important.

All right…

Do you have a simple test app that I can try that illustrates the problem?

If you can’t post it here, you can email it to me at the address in my profile.

  • Matt

After all, that window system gets pixel-exact things done now.
Matt, I still don't understand the argument that if you draw strips you'll have the lines drawn twice. That shouldn't be an argument. You're rendering far fewer fragments, as I understand it, and there is no texture memory access (at least I don't use textures). Why should textured, lit, whatever polys be faster to render than a simple line drawn twice?

You (all of you, in fact) are assigning preconceived notions to the performance of a whole lot of things: triangle perf, line perf, vertex perf, fill rate perf, etc.

Here’s a simple example. Imagine that the HW handles lines by drawing them as quads, i.e., two triangles.

Then PolygonMode GL_LINE transforms a single triangle into 3 lines, and in turn those 3 lines become 6 triangles.

So 6x the triangle setup work is required.

The facts of the matter are as follows:

  • A lot of HW is slower at drawing lines than triangles. Some HW is faster at lines.

  • Polygon mode is often slow. GF supports it in HW, but some folks don’t.

  • Polygon mode with different modes for front and back is just a bad idea.

  • Polygon mode GL_LINE is bad for wireframe because it draws each line twice.

  • Polygon mode GL_POINT is bad for point rendering because it draws each point MANY times.

  • Drawing a bunch of GL_LINE_STRIP primitives is probably the most efficient way to do wireframes (see the sketch after this list).

  • Drawing a bunch of GL_POINTS is by far the most efficient way to do point rendering.

  • GeForce is for games, Quadro is for professionals. Lines are a professional workstation feature.
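For illustration, a minimal sketch of the line-strip idea in the simplest possible case (a regular w x h grid of vertices; the function name is made up, and a triangulated mesh would first need its edges grouped into strips or paths):

```c
#include <GL/gl.h>

/* Wireframe of a w x h grid of vertices (row-major xyz array) using
 * GL_LINE_STRIP: one strip per row and one per column, so every grid
 * edge is sent exactly once and each strip costs only w (or h) vertices.
 * Diagonals of a triangulated grid are not drawn here. */
void draw_grid_wireframe(const float *verts, int w, int h)
{
    for (int y = 0; y < h; ++y) {                 /* horizontal strips */
        glBegin(GL_LINE_STRIP);
        for (int x = 0; x < w; ++x)
            glVertex3fv(&verts[3 * (y * w + x)]);
        glEnd();
    }
    for (int x = 0; x < w; ++x) {                 /* vertical strips */
        glBegin(GL_LINE_STRIP);
        for (int y = 0; y < h; ++y)
            glVertex3fv(&verts[3 * (y * w + x)]);
        glEnd();
    }
}
```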

  • Matt

> Lines are a professional workstation feature.

So that's why EverQuest slows down on a GF2 when it begins to rain. EQ is one of those "professional workstation application suites" that use line drawing. I'll use that the next time someone walks into my office at a bad time.

:slight_smile:

When I saw EQ for the first time, it took me about 10 seconds to decide that it was one of the worst 3D engines I had ever seen. I'll start with "no mipmaps", "no 32-bit color support", and "disgustingly bad terrain lighting". Don't use EQ as an example, or I'll have to sack you. (My [least] favorite is the claim I've seen on the web that "FSAA helps texture quality", with EQ given as an example. Yes, supersampling increases texture quality, but the reason it helps so much for EQ is that EQ isn't using mipmaps! DUH!)

I’m not saying that games don’t use lines at all. For example, Homeworld uses AA lines for cloaked ships and for when the viewpoint goes inside a ship. In fact, it uses AA lines with polygon mode GL_LINE.

But if you have a 100K poly model and want to animate its wireframe at 30 fps, that’s a whole different league.

If you want to compare cards at their line and AA line performance, I’d suggest starting with Viewperf. A number of Viewperf tests use lines, and ProCDRS tests 1 and 2 use AA lines. (In fact, Viewperf is actually a fairly good geometry performance benchmark in general.)

Note that Viewperf does not use polygon mode.

  • Matt

Thanks to Matt: it looks like NVIDIA is giving excellent support here on the OpenGL forums. Let's hope ATI and the others follow.

I get your point: one cannot assume how the HW implements primitives. Maybe if the GeForce2 driver implemented polygon mode GL_LINE using whatever it uses for GL_LINES or GL_LINE_STRIP, it would be fast.

Well, I'll consider this case closed, with the following conclusion: yes, the GeForce stalls when using polygon mode GL_LINE, so use a different approach, such as the GL_LINES primitive.

I’m not convinced that GL_LINE should be “that” slow. I could see it being a few times slower than GL_FILL.

  • Matt

Matt:
Well, at least in my engine GL_LINE is irrationally slow (at least compared to a bunch of other cards, including TNTs). Maybe that's just the way things are in this particular case... Anyway, I will implement both solutions, polygon mode GL_LINE and GL_LINES, and whichever works faster will be used automatically by my engine depending on the machine. Currently, GL_LINES is about 4x to 5x faster in 50k-triangle+ scenes on my GeForce2/P3 600 config.
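For the curious, the automatic selection will be something along these lines (a rough sketch with made-up function names; it times each path with glFinish() plus clock(), which is crude, since clock() measures CPU rather than wall time on some platforms, but it is enough to pick a winner on a given machine):

```c
#include <time.h>
#include <GL/gl.h>

/* Time one wireframe path: draw the test scene 'frames' times and
 * return the elapsed seconds.  glFinish() makes sure the GPU has
 * finished the queued work before the clock is read. */
static double time_path(void (*draw_scene)(void), int frames)
{
    glFinish();
    clock_t start = clock();
    for (int i = 0; i < frames; ++i)
        draw_scene();
    glFinish();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

/* Run once at startup: returns nonzero if the GL_LINES path wins. */
int use_gl_lines_path(void (*draw_with_polygon_mode)(void),
                      void (*draw_with_gl_lines)(void))
{
    double t_polymode = time_path(draw_with_polygon_mode, 10);
    double t_lines    = time_path(draw_with_gl_lines, 10);
    return t_lines < t_polymode;
}
```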

Also, the same happens in both Win98 and Linux.

Did you check the app I sent you? I hope you received it. If so, please give me your impressions of it. If not, please contact me so we can figure out how to send it again.

Thanks.

I already replied to your email. Let’s discuss it by email.

  • Matt

Matt: I didn't want to offend you, and I think you got me wrong. I didn't mean GL_LINES or whatever would be too slow; I meant that it is slower than drawing the filled and lit model, and I'm speaking of the same model!

I have a big array in the N3V3 format, or however it is called. I simply draw that stuff with a call to DrawArrays, or however that is called. If I draw it with the mode set to GL_LINES, it is more than 50% slower. That is what confuses me. I mean, these are non-antialiased lines, maybe drawn twice, but in my mind they should easily outperform filled polys.
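To be concrete, the drawing path I mean is roughly this (a simplified sketch, not my exact code; I'm using GL_N3F_V3F, glInterleavedArrays and glDrawArrays here as the standard names for what I described, and the wireframe pass is shown with polygon mode GL_LINE, though the same array could also be fed to a GL_LINES index list):

```c
#include <GL/gl.h>

/* Interleaved normal+position array (GL_N3F_V3F: nx,ny,nz, x,y,z per vertex),
 * drawn either filled or as a wireframe via polygon mode GL_LINE. */
void draw_model(const float *n3f_v3f, int vertex_count, int wireframe)
{
    glInterleavedArrays(GL_N3F_V3F, 0, n3f_v3f);
    glPolygonMode(GL_FRONT_AND_BACK, wireframe ? GL_LINE : GL_FILL);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   /* restore the default */
}
```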

PS: I still have a TNT, if that matters.

Oops, I didn't see one of your mails. You stated that you could see it being a few times slower. Sorry.

Michael, I don't want to speak for Matt, but he said in one of his previous posts:

“Imagine that the HW handles lines by drawing them as quads”

I have already read that somewhere… I cannot understand why they would do that but, if they do, it can explain why using lines would be slower than polygons…

OK, you have fewer pixels to set, but you have a lot of things going on before actually drawing those pixels...

Regards.

Eric