I’m a new member, so sorry if this is the wrong part of the forum, but here is my problem:
I’m an engineer and I write software that is used to display Finite Element (FE) meshes. These are made up of a mixture of trias and quads, arbitrarily connected at their vertices, and there can be several million in a typical mesh.
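For context, the in-memory layout is along these lines (a sketch only; the names and sizes are illustrative, not my actual code):

typedef struct { float x, y, z; } Node;

typedef struct {
    int n;          /* 3 for a tria, 4 for a quad */
    int node[4];    /* indices into a shared node table */
} Facet;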
For engineering purposes, rendering works best with “flat” lit primitives, i.e. each quad or tria has a single outward normal that is used for all its vertices. (Rendering speed is everything, and clever graphics are neither helpful nor wanted.)
At present I compute these normals and store them as signed byte triples (an accuracy of one part in +/-127 is good enough). Then, qualitatively, to render a quad I do:

glBegin(GL_QUADS);
glNormal3bv(normal);      /* one flat normal for the whole quad */
glVertex3fv(vertex1);
  & so on to
glVertex3fv(vertex4);
glEnd();
And it all works fine.
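(For concreteness, the precompute step is essentially the following; the function and variable names are just illustrative.)

#include <math.h>

/* Flat normal of a facet from its first three vertices, quantized to
   signed bytes for glNormal3bv.  Assumes consistent vertex winding. */
static void flat_normal_bytes(const float v0[3], const float v1[3],
                              const float v2[3], signed char out[3])
{
    float e1[3], e2[3], n[3], len;
    int i;

    for (i = 0; i < 3; i++) {               /* two edges of the facet */
        e1[i] = v1[i] - v0[i];
        e2[i] = v2[i] - v0[i];
    }
    n[0] = e1[1] * e2[2] - e1[2] * e2[1];   /* n = e1 x e2 */
    n[1] = e1[2] * e2[0] - e1[0] * e2[2];
    n[2] = e1[0] * e2[1] - e1[1] * e2[0];

    len = (float)sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 0.0f)
        for (i = 0; i < 3; i++)             /* normalize, scale to +/-127 */
            out[i] = (signed char)(127.0f * n[i] / len);
}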
However there are two problems with this approach:
#1 I have to compute and store all these normals, then send them down to the graphics card. This costs me a lot of storage and also a lot of wasted bandwidth.
#2 I can’t use Vertex Arrays for the normals since I only have one normal per quad/tria, rather than one normal per vertex. (Am I right that there is no way to use Vertex Arrays with “per quad/tria” rather than “per vertex” normal data?)
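The only workaround I can see is to stop sharing vertices between facets and replicate each facet normal four (or three) times so that glNormalPointer lines up, e.g. (a sketch only, with hypothetical array layouts):

#include <GL/gl.h>

/* Vertices are NOT shared between quads here, and each per-quad
   normal is copied four times, once per vertex. */
static void draw_quads(const GLfloat *verts,  /* 4 * nquads * 3 floats */
                       const GLbyte  *norms,  /* 4 * nquads * 3 bytes  */
                       GLsizei nquads)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glNormalPointer(GL_BYTE, 0, norms);       /* one (repeated) normal per vertex */
    glDrawArrays(GL_QUADS, 0, 4 * nquads);
}

But that multiplies both the vertex and the normal storage, which makes problem #1 worse rather than better.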
As far as I can see there is no existing intrinsic OpenGL command that will compute normals for me (leaving aside Bézier patches and/or NURBS surfaces, which would be major overkill for this) - am I right about this? It’s frustrating since it is such a simple process and the graphics card could do it so easily.
I’ve also looked at vertex programming, and it’s really frustrating that all the mathematical operations I need (cross products, etc.) are there, but I can’t see any way of using them on a “per quad” as opposed to “per vertex” basis - a vertex program only ever sees one vertex’s data at a time.
Am I missing something blindingly obvious? Or perhaps someone has already come up with a clever solution to this problem?
Any input would be welcome.