Do not deprecate GL_QUADS please

Unless it is actually done using a geometry shader generated by the driver. Of course, it won’t be any slower than if you wrote your own geometry shader to do the same thing, but keeping quads and polygons in the list of “native” primitives creates the misimpression that rendering with GL_QUADS or GL_POLYGON is as fast as rendering with triangles, which is not the case.

Unless it is actually done using a geometry shader generated by the driver.

Do you have any reason to believe this is the case? Triangle strips are done via simple cache tricks: it just accesses the last two entries in the post-T&L cache and flips the winding order when it gets a new vertex.

While GL_POLYGON probably isn’t hardware-driven, I see no reason why GL_QUADS wouldn’t be, especially since GL_QUADS are just a minor modification of strips (essentially, 4-element strips with a primitive restart between them). It would be trivial to implement them.
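
(For illustration only, and not something any driver is confirmed to do: here is a rough sketch of submitting quads as 4-vertex strips separated by a primitive restart index. It assumes a GL 3.1+ context or NV_primitive_restart, an extension loader such as GLEW, and vertices stored four per quad in the old GL_QUADS order; only the last two vertices of each quad need swapping.)

```c
/* Sketch: draw quad_count quads as 4-vertex triangle strips separated by a
 * primitive restart index.  Assumes vertices are stored four per quad in
 * GL_QUADS order (0,1,2,3) and that a GL 3.1+ context plus a loader such
 * as GLEW is available. */
#include <GL/glew.h>
#include <stdlib.h>

#define RESTART_INDEX 0xFFFFFFFFu

GLuint *build_strip_indices(unsigned quad_count)
{
    /* 4 indices per quad plus one restart index after each quad */
    GLuint *idx = malloc(sizeof(GLuint) * quad_count * 5);
    for (unsigned i = 0; i < quad_count; ++i) {
        GLuint base = i * 4;
        GLuint *p = idx + i * 5;
        p[0] = base + 0;
        p[1] = base + 1;
        p[2] = base + 3;        /* swap the last two vertices so that */
        p[3] = base + 2;        /* the strip covers the whole quad    */
        p[4] = RESTART_INDEX;
    }
    return idx;
}

void draw_quads_as_strips(unsigned quad_count)
{
    /* the index buffer built above is assumed to already be bound as
     * GL_ELEMENT_ARRAY_BUFFER */
    glEnable(GL_PRIMITIVE_RESTART);
    glPrimitiveRestartIndex(RESTART_INDEX);
    glDrawElements(GL_TRIANGLE_STRIP, quad_count * 5, GL_UNSIGNED_INT, 0);
    glDisable(GL_PRIMITIVE_RESTART);
}
```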

His point is that the same hardware that makes QUADS work makes strips and fans work. So if you’re making a hardware-based argument against QUADS, then this argument would necessarily need to be made against strips and fans, since they use the exact same hardware.

My 2c:
I think this whole discussion is completely pointless unless someone from the ARB tells us why the ARB decided the way it did.

My take on this is: it has to do with interpolation and clipping. Polygons and quads are the only primitives that can’t have easily interpolated attributes and can’t be easily clipped.

Even though the driver (or maybe even the hardware) can divide quads into triangles transparently to the API user, this abstraction leaks as soon as the API user expects vertex attributes to be interpolated ‘the quad way’. So, this leaky abstraction has been removed. One less undefined behaviour in the API, which is a good thing.
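
(To make that leak concrete, here is a small made-up example, not taken from the post above: sample one attribute at the centre of a quad. Bilinear interpolation, i.e. ‘the quad way’, and the two possible triangulations give three different answers unless the attribute varies linearly across the quad.)

```c
/* Illustration with made-up corner values: interpolate one attribute at the
 * centre of a unit quad three different ways. */
#include <stdio.h>

int main(void)
{
    /* attribute values at the four corners, counter-clockwise */
    float v0 = 0.0f, v1 = 1.0f, v2 = 0.0f, v3 = 1.0f;

    /* "the quad way": bilinear interpolation at the centre */
    float bilinear = (v0 + v1 + v2 + v3) * 0.25f;   /* 0.5 */

    /* quad split along the v0-v2 diagonal: the centre lies on that edge */
    float diag02 = (v0 + v2) * 0.5f;                /* 0.0 */

    /* quad split along the v1-v3 diagonal instead */
    float diag13 = (v1 + v3) * 0.5f;                /* 1.0 */

    printf("bilinear %.2f, split 0-2 %.2f, split 1-3 %.2f\n",
           bilinear, diag02, diag13);
    return 0;
}
```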

Very good point! Thanks for sharing!

You didn’t get my first answer, did you? I am not speculating here; I actually contribute code to a RadeonHD OpenGL driver myself. There are exactly 23 native primitive types in hardware (that’s more than OpenGL needs).

Interpolation and clipping can be carried out on triangles only; the other primitive types are converted to triangles in the preceding stage. At least that appears to be how it is done on AMD. NVIDIA might have native quad interpolation; I have seen some slides from them where they said they could do that.

One reason the ARB has got rid of quads might be Intel. Some of their old IGPs (DX9?) can’t do quads. Interestingly enough, all of their DX10 IGPs can. :slight_smile: (I am just looking at their open driver code and they clearly don’t emulate quads there).

Pick 4 points in 3D space. Any 4 points you like. Are they guaranteed to all lie on the same plane?

That’s why quads are trouble for hardware and that’s probably one reason why they’re deprecated and why they should have been killed years ago. A quad is most definitely not a simple primitive; that extra vertex complicates a whole heap of things.

(Same with GL_POLYGON but with the additional complication that the points may not form a convex polygon - it has even less reason to exist because probably 99 times out of 100 a trifan will get you the same result for the same set of points in the same order.)

Working on a driver that uses the public documentation of AMD’s ASICs does not mean you understand how the actual hardware works.

You say you’re not speculating, but you also say “it appears to”. A bit strange, don’t you think?

No matter what the reason for deprecating quads was, or how they are handled by current hardware, it is better that we got rid of them. I’ve never seen game developers or other professional graphics software developers complaining about quads being deprecated, and that’s enough for me (and I tell you, D3D11 does not have them either).

It’s true that 4 points in 3D space don’t, in general, lie on the same plane.

But generally, if we want to display a quad, it is because it really is a [planar] quad :slight_smile:

For example, in 2D a quad is always planar …

PS: like skynet, I ask: when/where did the ARB say that GL_QUADS or GL_POLYGON were candidates for deprecation?

PS2: I have read in a lot of papers that Microsoft has not been part of the OpenGL ARB since March 2003, and it seems that Intel deprecated quads in D3D9 but undeprecated them in D3D10 …

Intel deprecated quads in D3D9 but undeprecated them in D3D10 …

What are you talking about? Microsoft controls D3D, not Intel. And Microsoft does not have quads in D3D 10 and above.

Yes, you can make a page on the Wiki. It is for the people, by the people.

What do you mean by the best method?
There is the repeating index buffer method I mentioned above. You could also use tiny triangle strips with primitive restart, a geometry shader, or some other method.
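
(A minimal sketch of the repeating index buffer idea, my own code, assuming vertices are still laid out four per quad in the old GL_QUADS order: every quad becomes two triangles that share the same vertex data, so only the index buffer grows.)

```c
/* Sketch: build a GL_TRIANGLES index buffer that draws quad_count quads,
 * assuming vertices are stored four per quad in GL_QUADS order (0,1,2,3). */
#include <GL/gl.h>
#include <stdlib.h>

GLuint *build_quad_indices(unsigned quad_count)
{
    GLuint *idx = malloc(sizeof(GLuint) * quad_count * 6);
    for (unsigned i = 0; i < quad_count; ++i) {
        GLuint base = i * 4;
        GLuint *p = idx + i * 6;
        /* first triangle 0,1,2 -- second triangle 0,2,3 */
        p[0] = base + 0;  p[1] = base + 1;  p[2] = base + 2;
        p[3] = base + 0;  p[4] = base + 2;  p[5] = base + 3;
    }
    return idx;
}

/* Upload the result once to a GL_ELEMENT_ARRAY_BUFFER and draw with:
 *   glDrawElements(GL_TRIANGLES, quad_count * 6, GL_UNSIGNED_INT, 0);
 * The index pattern repeats, so the same buffer can be reused for any
 * batch of up to quad_count quads. */
```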

Since NVIDIA has said in presentations that they have native support for quads, and it seems strongly implied that ATI might as well, is there a method that’s guaranteed to hit the hardware through some sort of driver detection? If not, is there a method that usually gives the best average performance across a range of hardware?

As an example, you can look around and find out the best way to use VBOs for transitory data (using orphaned buffers). But I haven’t been able to find anything similar on the best way to do quads now that they have been removed from the core.