I seem to remember some complaints against Futuremark for using the GPU to do skinning multiple times: once per object to fill the depth buffer, and then once per object per light for stencil shadows.
As I understand stencil shadows, one does something like this to draw them:

draw scene to depth buffer
for (each edge shared by two triangles)
    determine which way each adjacent triangle faces relative to the light
    if (one triangle faces the light and the other faces away)  // silhouette edge
        draw a quad joining the original edge to a copy projected away from the light

Front faces increment the stencil, back faces decrement…
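The silhouette search above could be sketched on the CPU like this (a rough sketch assuming an indexed triangle mesh and a point light; all the names here are made up, and open edges are simply skipped):

```python
def face_normal(v0, v1, v2):
    """Unnormalized normal of triangle (v0, v1, v2) via the cross product."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def faces_light(tri, verts, light):
    """True if the triangle's front side points towards the light."""
    v0, v1, v2 = (verts[i] for i in tri)
    n = face_normal(v0, v1, v2)
    to_light = tuple(light[i] - v0[i] for i in range(3))
    return sum(n[i] * to_light[i] for i in range(3)) > 0.0

def silhouette_edges(tris, verts, light):
    """Edges shared by one lit and one unlit triangle.

    Each edge is keyed by its sorted vertex-index pair so the two
    triangles sharing it map to the same key."""
    edge_facing = {}
    sil = []
    for tri in tris:
        lit = faces_light(tri, verts, light)
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            key = (min(a, b), max(a, b))
            if key in edge_facing:
                if edge_facing[key] != lit:
                    sil.append(key)  # facings differ -> silhouette edge
            else:
                edge_facing[key] = lit
    return sil
```

Given two triangles folded along a shared edge with the light off to one side, only the fold line comes back as a silhouette edge.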
How can you offload this processing to the vertex program?
Double each vertex, put the light position in the vertex program constants, project every vertex away from the light, and then multiply by an extra bit of per-vertex data to cancel out that projection where it isn't wanted?
There is a way, though you don’t get perfectly accurate shadow volumes. To generate the volume, you determine whether a vertex’s normal points towards or away from the light. If it points towards the light, you T&L it as normal (granted, since this is the volume pass, there’s no lighting). If it points away, you move that vertex “infinitely” far from the light along the vector from the light through the position the vertex would have had after normal T&L.
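In scalar code, the test the vertex program would run on each vertex looks roughly like this (just a sketch; `extrude_distance` stands in for "infinitely far", and the names are illustrative):

```python
def extrude_vertex(pos, normal, light, extrude_distance=1e4):
    """Per-vertex shadow-volume extrusion, as a vertex program would do it.

    If the vertex normal points towards the light, the vertex stays put;
    otherwise it is pushed far along the direction from the light through
    the vertex. extrude_distance stands in for 'infinity'."""
    to_vertex = tuple(pos[i] - light[i] for i in range(3))
    # dot(normal, light - pos): positive means the vertex faces the light
    facing = sum(normal[i] * -to_vertex[i] for i in range(3))
    if facing >= 0.0:
        return pos  # lit side: leave the vertex where normal T&L put it
    # unlit side: extrude away from the light
    length = sum(c * c for c in to_vertex) ** 0.5
    d = tuple(c / length for c in to_vertex)
    return tuple(pos[i] + d[i] * extrude_distance for i in range(3))
```

A vertex whose normal faces the light passes through unchanged; one facing away gets flung out along the light-to-vertex ray.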
This method works best, of course, for high-poly models. But even moderately low-poly models can have reasonably decent shadows if you pull the vertices far enough away.
Ahh… It finally just clicked.
I like the idea of putting a quad on every edge of my mesh, giving the verts the normals of the adjacent triangles, and tossing that off to the vertex program.
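That setup could be sketched like so (hypothetical names; assumes a closed mesh so every edge is shared by exactly two triangles — the quads start out degenerate and only open up where the vertex program extrudes one side):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return tuple(a[i] - b[i] for i in range(3))

def build_edge_quads(tris, verts):
    """For every edge shared by two triangles, emit a degenerate quad:
    two coincident copies of each edge endpoint, each copy carrying the
    face normal of one adjacent triangle. When the vertex program
    extrudes only the copies whose normal faces away from the light,
    the quad opens up exactly at silhouette edges."""
    normals = [cross(sub(verts[t[1]], verts[t[0]]),
                     sub(verts[t[2]], verts[t[0]])) for t in tris]
    owners = {}
    for ti, t in enumerate(tris):
        for a, b in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            owners.setdefault((min(a, b), max(a, b)), []).append(ti)
    quads = []
    for (a, b), tri_ids in owners.items():
        if len(tri_ids) != 2:
            continue  # open edge; a closed mesh has none
        n0, n1 = normals[tri_ids[0]], normals[tri_ids[1]]
        # zero-area quad: each endpoint duplicated, one copy per face normal
        quads.append([(verts[a], n0), (verts[b], n0),
                      (verts[b], n1), (verts[a], n1)])
    return quads
```

This is precomputed once per mesh; at runtime the vertex program does all the per-light work.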