GL_TRIANGLES vs GL_TRIANGLE_FAN/STRIP

In fairness it was a little bit buried.

I’ve no objection to the use of strips in cases where the programmer has done the proper groundwork (including profiling of properly written strip code vs properly written list code) and determined that strips are the best solution for their requirements. Likewise when coding to hardware that prefers strips. My problem is when strips are still being recommended as the best-case general solution; what I’ve been saying here should really be read from that perspective.


I agree with your intent, but that’s not what you said. Your initial response to Ugluk disparaged the use of strips as “fooling around.” And you outright stated that using them would lead to “heavy investment in an overly complex and fragile renderer.”

There’s a difference between the position that “strips shouldn’t be the default case” and “using strips will destroy your engine.” And your words clearly indicated the latter position more than the former.

We’re splitting useless hairs again here, Alfonse.

Bottom line: optimize for the post-T&L vertex cache first. That’s the big fish. Then, if you’re really bored, knock yourself out and stripify the indices.

And again, there’s this whole ignoring of the fact that strips use no less than 50% fewer indices than lists. That’s really important if you’ve got a lot of indices.
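The index-count claim is easy to sanity-check with arithmetic: a single strip of N triangles takes N + 2 indices while a list takes 3N, so the saving reaches 50% at four triangles per strip and approaches two-thirds for long strips. A quick sketch (purely illustrative helper names):

```python
def list_indices(n_triangles):
    # An indexed triangle list uses 3 indices per triangle.
    return 3 * n_triangles

def strip_indices(n_triangles):
    # A single triangle strip uses 2 starting indices plus 1 per triangle.
    return n_triangles + 2

for n in (1, 4, 100):
    saved = 1 - strip_indices(n) / list_indices(n)
    print(f"{n:>3} tris: list={list_indices(n):>3}, "
          f"strip={strip_indices(n):>3}, saved={saved:.0%}")
```

Which is the other side of the argument: the saving only materializes if strips stay long, and it says nothing about whether index bandwidth was ever the bottleneck.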

“Really important”?

Have you ever proven that you’re bottlenecked on the bandwidth for these tiny extra indices, whose vertices hit the post-T&L cache anyway? Really?

Have you ever heard of anybody that’s bottlenecked on them?

What’s the “really important” gain that we’re talking about here? What were the frame times, before and after stripifying the indices? On what GPU/driver?

I am aware this has become an old thread, but I’ve been away from my program for a long time and just need a clarification on the subject now that I’m finally back at it.

From the discussion that ensued from my question, I take it that GL_TRIANGLE_STRIP isn’t worth it: there’s the risk the hardware is optimized for GL_TRIANGLES, in which case _STRIP would be slower, and even if not, the speed gain would be marginal since the only benefit is fewer fetches from the index buffer. Is this correct?

As far as non-indexed TRIANGLE_STRIPs go, right. Unless you’ve got some really junky GPU you’re rendering with, you can get more bang for the buck with indexed TRIANGLES.

With non-indexed TRIANGLE_STRIPs, your ATVR (avg transform to vtx ratio) on a 2D mesh can be up around 2. But with indexed TRIANGLEs, you can get it down to closer to 1.
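For concreteness, ATVR can be measured with a toy simulator. This sketch assumes a simple 16-entry FIFO post-T&L cache (real cache sizes and replacement policies vary by GPU), and `atvr`/`grid_indices` are hypothetical helpers, not any real API:

```python
from collections import deque

def atvr(indices, cache_size=16):
    """Average transform-to-vertex ratio: transforms executed divided by
    unique vertices referenced, assuming a FIFO post-T&L cache."""
    cache = deque(maxlen=cache_size)  # holds recently transformed vertex ids
    transforms = 0
    for i in indices:
        if i not in cache:  # cache miss -> the vertex shader runs again
            transforms += 1
            cache.append(i)
    return transforms / len(set(indices))

def grid_indices(cols, rows):
    # Indexed TRIANGLES for a regular grid, two triangles per quad,
    # emitted row-major. Cache-hostile when a vertex row (cols + 1
    # vertices) is wider than the cache, because the shared row between
    # two quad-rows gets evicted and transformed twice.
    idx = []
    for r in range(rows):
        for c in range(cols):
            v = r * (cols + 1) + c
            idx += [v, v + 1, v + cols + 1,
                    v + 1, v + cols + 2, v + cols + 1]
    return idx
```

Under this model a 64×64 grid emitted row-major lands near an ATVR of 2 (every interior vertex transformed twice), while a 4-quad-wide grid — narrow enough that the shared vertex row survives in the cache — comes out at 1.0. That’s the gap a triangle order optimizer is closing.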

On that subject, I love this blog post:

As he says, optimize your indexed TRIANGLES for maximal vertex cache reuse first! Then if you’re just bored, you can try and save a really tiny bit of index bandwidth by compacting them using indexed TRIANGLE_STRIPs (if your triangle order optimizer tends toward producing contiguous triangles), possibly using primitive restart or degenerate tris to join multiple strips in one batch. But whatever you do, don’t change the vertex order! That’s what saves the real perf!
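A sketch of the two joining tricks mentioned above, just building the index buffers on the CPU — the `RESTART` value and helper names are illustrative. With core GL you’d enable GL_PRIMITIVE_RESTART and register the same sentinel via glPrimitiveRestartIndex:

```python
RESTART = 0xFFFF  # sentinel for 16-bit indices; would be passed to
                  # glPrimitiveRestartIndex (value is an arbitrary choice)

def join_with_restart(strips):
    # Concatenate strips, separating them with the restart index so the
    # GPU starts a fresh strip at each sentinel.
    out = []
    for s in strips:
        if out:
            out.append(RESTART)
        out.extend(s)
    return out

def join_with_degenerates(strips):
    # Older trick: repeat the last index of one strip and the first index
    # of the next; the resulting zero-area triangles are rejected cheaply.
    # NOTE: the next strip's winding flips unless the preceding strip's
    # index count is even.
    out = []
    for s in strips:
        if out:
            out += [out[-1], s[0]]
        out.extend(s)
    return out
```

Either way the vertex buffer — and crucially the cache-optimized vertex order — is untouched; only the index stream changes.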

And on the indexed TRIANGLE_STRIPs thing, I don’t think I’ve ever seen anyone produce a test case where they were index bound, such that stripifying the indexed TRIANGLES got them anything perf-wise.

So really, I’d just forget TRIANGLE_STRIPs for TIN meshes unless you’re doing something really special: developing on a mobile device where memory is at a super-premium, say, or rendering a regular grid where you can walk the mesh back and forth in strips that fit in the vertex cache (in which case you wouldn’t even use a triangle order optimizer). Then, maybe, consider indexed TRIANGLE_STRIPs.
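For that regular-grid case, one strip per row is the classic layout. A sketch, assuming row-major vertex storage with stride `cols + 1` (the helper name is mine, not any real API); each row strip uses 2·(cols + 1) indices for 2·cols triangles, versus 6·cols for a list:

```python
def row_strip(cols, r, row_stride):
    # One grid row of quads as a single triangle strip: alternate between
    # the top and bottom vertex rows as you walk across the columns. As
    # long as the two vertex rows fit in the post-T&L cache, every vertex
    # is transformed exactly once.
    strip = []
    for c in range(cols + 1):
        strip.append(r * row_stride + c)        # top-row vertex
        strip.append((r + 1) * row_stride + c)  # bottom-row vertex
    return strip
```

Joining the per-row strips with primitive restart then gives one draw call for the whole grid.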

But non-indexed TRIANGLE_STRIPs? Nuke 'em.