Originally posted by cass:
All hardware I know of converts everything to triangles before rasterization, but the primitive assembly hardware in GeForce products recognizes all the OpenGL types: QUADS, QUAD_STRIP, TRIANGLES, etc.
Since the hardware recognizes these types directly, there is no extra overhead in using them. That is, the driver does not have to re-order the data before sending it to the hardware.
That's something I do not quite understand: I knew all cards did what you describe.
But when I and others used to write our own 3D engines on the Atari/Amiga, none of our algorithms needed to split geometry into triangles… I don't want to go into how we rasterized things (if people are interested, simply e-mail me), but we REALLY didn't have to do that…
Now, of course, our engines were not OpenGL or D3D or Glide or XXXXX. But I do not see where the need for triangles comes from… Is it simply a hardware constraint? Or is it a speed constraint (that one I would understand!)?
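One common answer to the speed question is worth sketching: three vertices always determine a unique plane for each interpolated attribute, so a triangle rasterizer can precompute constant per-triangle gradients and walk pixels with two additions. A quad's four vertices generally over-determine such a plane. Here is a minimal sketch of that setup (all vertex positions and attribute values below are hypothetical, chosen just for illustration):

```python
# For a triangle, per-vertex attribute values define a unique plane
# a(x, y) = A*x + B*y + C. The rasterizer solves for A, B, C once per
# triangle, then steps the attribute across the span with constant
# increments: A per pixel in x, B per scanline in y.

def plane_coeffs(p0, p1, p2):
    """Fit a(x, y) = A*x + B*y + C through three (x, y, attr) samples."""
    (x0, y0, a0), (x1, y1, a1), (x2, y2, a2) = p0, p1, p2
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)  # twice the signed area
    A = ((a1 - a0) * (y2 - y0) - (a2 - a0) * (y1 - y0)) / det
    B = ((x1 - x0) * (a2 - a0) - (x2 - x0) * (a1 - a0)) / det
    C = a0 - A * x0 - B * y0
    return A, B, C

# Hypothetical triangle: attribute 0.0 at (0,0) and (0,4), 1.0 at (4,0).
A, B, C = plane_coeffs((0, 0, 0.0), (4, 0, 1.0), (0, 4, 0.0))
print(A * 4 + B * 0 + C)  # attribute at (4, 0) → 1.0
```

With four arbitrary quad vertices no single (A, B, C) fits in general, which is one reason hardware standardizes on triangles internally.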
Nicolas, I know QUADS are not ideal for color interpolation: with the methods we used, there were certainly occasional strange artifacts… On the other hand, when I ask my GeForce to draw a QUAD whose four vertex colors are not coplanar in color space, one of the quad's diagonals is clearly visible on screen (that's where the quad is split into triangles…). So, what's the best approach?
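A quick numeric sketch shows why that diagonal is visible. Take a unit quad with one color channel per corner (the values below are hypothetical, picked so the colors are NOT affine over the quad) and compare true bilinear interpolation against linear interpolation over the two possible triangle splits:

```python
# Corner colors of a unit quad (single channel), at (u,v) corners
# (0,0), (1,0), (1,1), (0,1). Chosen so no plane fits all four.
c00, c10, c11, c01 = 0.0, 1.0, 0.0, 1.0

def bilinear(u, v):
    """Bilinear interpolation over the whole quad."""
    return c00*(1-u)*(1-v) + c10*u*(1-v) + c01*(1-u)*v + c11*u*v

def split_00_11(u, v):
    """Linear (barycentric) interpolation after splitting on the 00-11 diagonal."""
    if u >= v:   # triangle (00, 10, 11)
        return c00*(1-u) + c10*(u-v) + c11*v
    else:        # triangle (00, 11, 01)
        return c00*(1-v) + c11*u + c01*(v-u)

def split_10_01(u, v):
    """Linear (barycentric) interpolation after splitting on the 10-01 diagonal."""
    if u + v <= 1:  # triangle (00, 10, 01)
        return c00*(1-u-v) + c10*u + c01*v
    else:           # triangle (10, 11, 01)
        return c10*(1-v) + c11*(u+v-1) + c01*(1-u)

# Same pixel (the quad center), three different answers:
print(bilinear(0.5, 0.5))     # 0.5
print(split_00_11(0.5, 0.5))  # 0.0
print(split_10_01(0.5, 0.5))  # 1.0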
There must be something I am missing: everybody splits into triangles these days…
My question is: why?
P.S.: I might try to port one of my Atari Falcon 3D engines to the PC just for fun…
By the way, if you want to see such a 3D engine implemented in Java, go to:
Equinox was one of the famous demo groups on the Atari ST. They have very good 3D skills!