After looking over how the triangle normals need to be calculated for bone-animated meshes, I really do think there may be a bigger benefit to transforming vertex positions on the CPU (while keeping vertex normal transforms on hardware), then using the transformed vertices to calculate the new triangle normals. I have already implemented this method and it isn't as slow as it may seem at first. It has the added performance bonus of not re-transforming vertices for each lighting pass (though the hardware is still doing that for normals; I may consider doing those on the CPU as well).
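A minimal sketch of what I mean by CPU position skinning, assuming one bone per vertex and row-major 3x4 bone matrices (real skinning usually blends several weighted bones, but the idea is the same; all names here are made up for illustration):

```c
#include <stddef.h>

typedef struct { float m[12]; } Mat34;  /* row-major 3x4 bone matrix */
typedef struct { float x, y, z; } Vec3;

static Vec3 transform_point(const Mat34 *m, Vec3 p)
{
    Vec3 r;
    r.x = m->m[0]*p.x + m->m[1]*p.y + m->m[2]*p.z  + m->m[3];
    r.y = m->m[4]*p.x + m->m[5]*p.y + m->m[6]*p.z  + m->m[7];
    r.z = m->m[8]*p.x + m->m[9]*p.y + m->m[10]*p.z + m->m[11];
    return r;
}

/* Transform every bind-pose position by its bone's matrix on the CPU.
 * The results feed triangle-normal calculation and lighting passes. */
void skin_positions(const Vec3 *bind, const int *bone_index,
                    const Mat34 *bones, Vec3 *out, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        out[i] = transform_point(&bones[bone_index[i]], bind[i]);
}
```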
[This message has been edited by AMahajan (edited 02-27-2003).]
Generating the triangle planes from the transformed triangle mesh vertices should be straightforward, as I explained in my earlier post.
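For reference, here is one way to build a triangle plane from three transformed vertices (a sketch; counter-clockwise winding assumed, and the normal is left unnormalized since the later tests only need its sign):

```c
typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 n; float d; } Plane;  /* plane: dot(n, p) + d = 0 */

Plane triangle_plane(Vec3 a, Vec3 b, Vec3 c)
{
    /* Two edge vectors out of vertex a. */
    Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
    Plane p;
    /* Cross product gives the (unnormalized) face normal. */
    p.n.x = e1.y * e2.z - e1.z * e2.y;
    p.n.y = e1.z * e2.x - e1.x * e2.z;
    p.n.z = e1.x * e2.y - e1.y * e2.x;
    /* d chosen so that vertex a satisfies the plane equation. */
    p.d = -(p.n.x * a.x + p.n.y * a.y + p.n.z * a.z);
    return p;
}
```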
The issue of vertex programs is an interesting one, so I can offer some advice based on what I am currently doing with the Abducted engine.
It is possible to compute silhouette edges in a vertex shader, but this is tricky and has the potential for a large triangle overhead when drawing shadow volumes. In a typical scene, shadow volumes can account for 1/3 to 1/2 of all triangles.
My suggestion is to break it into two stages: CPU skeletal transformation and calculation of triangle planes, and from that, calculation of silhouette edges.
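A sketch of the silhouette step, assuming you keep a precomputed edge list where each edge records the one or two triangles that share it (struct layout is my invention): a triangle "faces the light" when the light position is on the front side of its plane, and an edge is a silhouette edge when exactly one of its two triangles faces the light.

```c
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 n; float d; } Plane;            /* dot(n, p) + d = 0 */
typedef struct { int v0, v1, tri0, tri1; } Edge;      /* tri1 = -1 on open edges */

static int faces_light(const Plane *p, Vec3 light)
{
    return p->n.x*light.x + p->n.y*light.y + p->n.z*light.z + p->d > 0.0f;
}

/* Marks silhouette edges in is_sil[]; returns how many were found. */
size_t find_silhouette(const Edge *edges, size_t num_edges,
                       const Plane *tri_planes, Vec3 light, int *is_sil)
{
    size_t count = 0;
    for (size_t i = 0; i < num_edges; ++i) {
        int f0 = faces_light(&tri_planes[edges[i].tri0], light);
        int f1 = edges[i].tri1 >= 0
               ? faces_light(&tri_planes[edges[i].tri1], light)
               : !f0;                 /* open edges always silhouette */
        is_sil[i] = (f0 != f1);
        count += (size_t)is_sil[i];
    }
    return count;
}
```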
By using 4-component float positions, you can have the CPU insert 0s or 1s (as the w coordinate) as inputs into the vertex program that will extrude your volumes. You can insert a 1 for vertices that lie on a silhouette boundary, and a 0 for all others.
At the vertex program level you can choose to extrude your mesh by moving each vertex a given distance along the vertex-to-light vector. You can also insert a 0 or 1 into the w coordinate to produce "infinite" shadow volumes.
NVIDIA has excellent papers on infinite shadow volumes.
If the code for the skeletal transforms is reasonably well optimized, it won't be a bottleneck. Typically your scene is going to be heavily fillrate (and thus graphics card) bound.
[This message has been edited by JoeR (edited 02-27-2003).]
Just a quick correction to a misconception earlier in this thread: if you only use the face normals for backface culling and silhouette detection, you don't need to normalize them. A mere cross product will do just fine.
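To see why: these tests only need the sign of dot(n, p - a), and normalizing n just scales that dot product by a positive factor, which never flips the sign. A small sketch:

```c
typedef struct { float x, y, z; } Vec3;

/* Does triangle (a, b, c), wound counter-clockwise, face point p?
 * The raw cross-product normal is used with no normalization. */
int triangle_faces_point(Vec3 a, Vec3 b, Vec3 c, Vec3 p)
{
    Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n  = { e1.y*e2.z - e1.z*e2.y,
                e1.z*e2.x - e1.x*e2.z,
                e1.x*e2.y - e1.y*e2.x };   /* unnormalized face normal */
    Vec3 v  = { p.x - a.x, p.y - a.y, p.z - a.z };
    return n.x*v.x + n.y*v.y + n.z*v.z > 0.0f;  /* only the sign matters */
}
```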