Fast silhouette detection for shadow volumes

Hi!

At the moment I only know of one algorithm for detecting the silhouette of a model, which is needed for shadow volumes. This algorithm does the following:

Loop through all faces of the model and check if the face is lit; if a face is lit, loop through all its edges and check if the neighbouring face is not lit (using connectivity info); if the neighbouring face is not lit, we have a silhouette edge.
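In code, that test looks something like the sketch below (a minimal version assuming an indexed mesh with precomputed per-edge adjacency; all type and function names here are made up for illustration). Note that it iterates over edges rather than faces, which visits each shared edge once instead of twice:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One edge with the indices of the two faces sharing it (connectivity info).
struct Edge {
    int v0, v1;       // endpoint vertex indices
    int faceA, faceB; // the two adjacent faces
};

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A face is "lit" if its normal points towards the light.
static bool isLit(const Vec3& normal, const Vec3& toLight) {
    return dot(normal, toLight) > 0.0f;
}

// Brute force: an edge is on the silhouette if exactly one of its
// two adjacent faces is lit.
std::vector<int> findSilhouetteEdges(const std::vector<Edge>& edges,
                                     const std::vector<Vec3>& faceNormals,
                                     const Vec3& toLight) {
    std::vector<int> silhouette;
    for (std::size_t i = 0; i < edges.size(); ++i) {
        bool litA = isLit(faceNormals[edges[i].faceA], toLight);
        bool litB = isLit(faceNormals[edges[i].faceB], toLight);
        if (litA != litB)
            silhouette.push_back(static_cast<int>(i));
    }
    return silhouette;
}
```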

Unfortunately this algorithm is pretty expensive for complex models. Is there a better way to detect silhouette edges on the CPU?

Thanks in advance
LaBasX2

yes. for each edge store its adjacent edges, and once you’ve found a silhouette edge, follow those to quickly find the whole silhouette…

something like that…

hi!

i don’t think there are any “general case” algorithms that can do this in a more clever way…

the only optimization i see is to keep the previous silhouette’s casting edges and, instead of computing the edges from scratch, try to update the previously computed ones…

- when the casting object doesn’t move, the silhouette stays the same (assuming the light position didn’t change)…

- when the position of the light or the casting object changes, the shadow volume’s edges (i think you know what i mean - i wish i could express myself better) are “sliding” from one edge to a neighbouring face’s edge… when the movement is relatively slow (imagine a very slowly spinning sphere and a static light), the “casting edges” need only a little update - this doesn’t require computations for every face, but only for some of them…

this is not a big optimization for fast moving objects/lights in the scene, but better this than nothing…

yep - another thing could be somehow precomputing the dot products (JC is an expert at this), which are the most expensive operation…

What about this algo:

preprocess:
for every edge:
find its neighbour edge and store it with a flag (could be a bit) in an edgearray.

1.pass:
for every face:
compute lighting/culling info (true or false)
for every edge of this face:
store this flag in your edgearray

2.pass:
for every edge:
if current edgeflag == true
get neighbour
if neighbour’s edgeflag == false
extrude edge

i think this algorithm is a little bit more cache-friendly. You can also easily reuse your edgearray when the geometry and light position are not changing.
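Those two passes could look roughly like this in C++ (a sketch with hypothetical names; it assumes a triangle mesh where each face stores three indices into the edgearray, and each edge entry stores the index of its matching edge on the adjacent face, as built in the preprocess):

```cpp
#include <cstddef>
#include <vector>

// One entry of the edgearray built in the preprocess.
struct EdgeEntry {
    int neighbour; // index of the matching edge on the adjacent face
    bool lit;      // facing flag of the face owning this edge (pass 1)
};

struct Face {
    int edges[3];  // indices into the edgearray (triangle mesh)
};

// 1st pass: write each face's lighting/culling flag into all of its edges.
void pass1(const std::vector<Face>& faces,
           const std::vector<bool>& faceLit,
           std::vector<EdgeEntry>& edgeArray) {
    for (std::size_t f = 0; f < faces.size(); ++f)
        for (int e : faces[f].edges)
            edgeArray[e].lit = faceLit[f];
}

// 2nd pass: an edge whose flag is true while its neighbour's flag
// is false lies on the silhouette and should be extruded.
std::vector<int> pass2(const std::vector<EdgeEntry>& edgeArray) {
    std::vector<int> toExtrude;
    for (std::size_t e = 0; e < edgeArray.size(); ++e)
        if (edgeArray[e].lit && !edgeArray[edgeArray[e].neighbour].lit)
            toExtrude.push_back(static_cast<int>(e));
    return toExtrude;
}
```

Both passes are linear scans over contiguous arrays, which is where the cache-friendliness comes from.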

In this paper there is a description of ‘Fast Silhouette Extraction’. I don’t remember it exactly but I think they store the edges in a tree… http://research.microsoft.com/~hoppe/silclip.pdf

Hope this helps.

Originally posted by LaBasX2:

Loop trough all faces of a model and check if face is lighted; if a face is lighted, loop through all its edges and check if the neighbouring face is not lighted (using connectivity info); if the neighbouring face is not lighted, we have a silhouette edge.

Don’t forget this is true for convex models only…

Julien.

For static models, try normal masks ( for clustered backface culling ). I’ve previously used that.
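I’m not certain which exact variant is meant by normal masks here, but one common form of clustered backface culling is to bucket faces by normal direction in a preprocess and then test a whole cluster at once against the light with a normal cone; something like this sketch (all names are hypothetical, and the cone test is the standard one, not necessarily what was used):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A cluster of faces whose normals all lie within some half angle
// of the cone axis (built once in a preprocess for static models).
struct NormalCluster {
    Vec3 axis;          // unit cone axis
    float sinHalfAngle; // sin of the cone's half angle
    std::vector<int> faces;
};

enum class Facing { AllFront, AllBack, Mixed };

// Classify a whole cluster against the (unit) light direction L.
// Every normal n in the cluster is within the half angle of the axis,
// so if the angle between axis and L plus the half angle stays below
// 90 degrees, i.e. dot(axis, L) > sin(halfAngle), then dot(n, L) > 0
// for all of them. The symmetric test gives all-back; otherwise the
// faces of this cluster must be tested individually.
Facing classify(const NormalCluster& c, const Vec3& toLight) {
    float d = dot(c.axis, toLight);
    if (d > c.sinHalfAngle)  return Facing::AllFront;
    if (d < -c.sinHalfAngle) return Facing::AllBack;
    return Facing::Mixed;
}
```

With a few dozen clusters per model, most faces get classified with one dot product per cluster instead of one per face.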

You can get quite good performance using SSE ( and probably 3DNow! too ). It’s really not worth the trouble implementing higher level optimizations for this particular problem ( in general ). I’ve looked into a couple of methods but the data structures were too costly to update for deformable models. For complex deformable models a brute-force GPU approach is likely faster.

Originally posted by deepmind:
[b] Don’t forget this is true for convex models only…

Julien.[/b]

You would need to find multiple lit regions. All these methods would require tessellation, it seems (another performance problem there)

V-man

Originally posted by deepmind:
Don’t forget this is true for convex models only…

Are you sure ? I don’t see how concave models could lead to errors.

LaBasX2: If you’re NOT using Carmack’s reverse, you could naturally speed up your silhouette detection by using a simplified mesh.
I mean, you render your “lit & textured” mesh with the normal “full-res” mesh, but you perform silhouette detection and project edges into the stencil with the “simplified” mesh.

Obviously this works better with static meshes since you’re not likely to be computing a mesh simplification at each frame for dynamic meshes (hopefully).

It’s not going to be useful with Carmack’s reverse because the Carmack’s reverse algorithm needs capping, so you need to match depth fragments in the multiple passes at the caps.

Without Carmack’s reverse it’s still not perfect, but the error is relatively low depending on your simplification algorithm, and depending on how important it is in your scene; eg in a game the hero could have more detailed shadows than a simple chair or door.
For instance, in the game “Taz Wanted” for PS2, the main character Taz is rendered with ‘strong’ silhouettes to give a cartoon effect, but most of the rest of the game is rendered without this silhouette technique.
Then in your application, you could perform brute-force silhouette detection on the “full meshes” of objects that are important in your scene, and silhouette detection on the “simplified meshes” of objects that are less important.

[This message has been edited by vincoof (edited 06-06-2002).]

Originally posted by vincoof:
Are you sure ? I don’t see how concave models could yield to errors.

Imagine a shape like two spheres stuck together and view it from an angle where one sphere partially occludes the other. Each sphere will have a silhouette but the concave part prevents the two silhouettes from connecting. Your algorithm will find only one.

Best is to use 3DNow/SSE; I have implemented this and got some 700% increase in speed over an optimized C++ version. This is no longer a bottleneck in my engine because of the SSE/3DNow implementation, and the bottleneck happens to be the fillrate.
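For reference, the usual trick is to lay the face normals out in SoA form (separate x/y/z arrays) so that four dot products against the light direction can be computed per SSE instruction. A hedged sketch with intrinsics (hypothetical function; assumes the face count is a multiple of 4 and ignores alignment):

```cpp
#include <xmmintrin.h> // SSE intrinsics

// Computes dot(normal, light) for 'count' faces (count assumed to be
// a multiple of 4). Normals are stored SoA: nx[i], ny[i], nz[i].
// The sign of dots[i] then gives the facing flag for face i.
void computeFacing(const float* nx, const float* ny, const float* nz,
                   int count, float lx, float ly, float lz,
                   float* dots) {
    __m128 Lx = _mm_set1_ps(lx);
    __m128 Ly = _mm_set1_ps(ly);
    __m128 Lz = _mm_set1_ps(lz);
    for (int i = 0; i < count; i += 4) {
        // four dot products in parallel: nx*lx + ny*ly + nz*lz
        __m128 d = _mm_add_ps(
            _mm_add_ps(_mm_mul_ps(_mm_loadu_ps(nx + i), Lx),
                       _mm_mul_ps(_mm_loadu_ps(ny + i), Ly)),
            _mm_mul_ps(_mm_loadu_ps(nz + i), Lz));
        _mm_storeu_ps(dots + i, d);
    }
}
```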

-Sundar

If you run this algorithm on concave geometry, you may end up with more than one silhouette loop, but that may be OK.

ASCII art:

123456
A /
B/ /\ <== light direction
C____/

There is a silhouette made out of the sides at B6 and C6. There’s also one made out of the sides at A3 and B4. However, for closed manifold geometry, these individual silhouettes will be well behaved (assuming you have a counting stencil buffer, not just a single bit)

That being said, ways of accelerating silhouette finding include mostly more advanced data structures; for example you could look into winged edge representations. It’s a shame you need the post-skinning data for getting the silhouette, though, which means you HAVE to do skinning on the CPU.
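For what it’s worth, a stripped-down winged-edge record looks something like this (field names hypothetical, and this sketch uses the simplifying convention that `nextLeft` of an edge shares the same left face); the “wing” links are what let you walk a face or a silhouette loop edge to edge without scanning the whole edge list:

```cpp
#include <vector>

// Minimal winged-edge record: each edge knows its two vertices, its
// two adjacent faces, and the next edge around each face ("wings").
struct WingedEdge {
    int vStart, vEnd;        // endpoint vertices
    int faceLeft, faceRight; // adjacent faces
    int nextLeft;            // next edge CCW around faceLeft
    int nextRight;           // next edge CCW around faceRight
};

// Basic winged-edge traversal: walk the edges around the left face
// of 'start' and return how many edges that face has.
int countFaceEdges(const std::vector<WingedEdge>& edges, int start) {
    int e = start, n = 0;
    do {
        e = edges[e].nextLeft;
        ++n;
    } while (e != start);
    return n;
}
```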

Originally posted by jwatte:
…ways of accelerating silhouette finding include mostly more advanced data structures; for example you could look into winged edge representations.

How is that going to improve silhouette extraction ? Sure, you can find connected vertices, edges and faces efficiently using winged edge, but most of that information is not of any use for silhouette extraction. You still need to determine the facingness of the mesh triangles and select one, two or three possible silhouette edges.

Or perhaps you have something else in mind with the winged edge data structure ? If you’re thinking about following edges to build the silhouette then I’ve had a specific case where two silhouette loops met at a single vertex ( making that approach difficult/impossible ).

Paul,

Perhaps I’m confused, but what difference does it make if your silhouette loop goes through the same vertex more than once? You just don’t want to traverse the same edge more than once.

Thanks -
Cass

Hi Cass,

It was just an approach I tried in early 2000. I tried selecting an edge and following it through the mesh ( adding new edges to the loop ). Now that I come to think of it, it might have been a failed capping approach ( very likely ). I still have a screenshot of the “troublesome” silhouette.

Do you think there’s a benefit to creating the silhouette by joining segments like that ?

Paul,

There’s certainly a T&L and geometry transfer benefit to rendering as strips. Two-sided stencil will probably help cover up bubbles like that in the pipeline since the setup engine will not be throwing triangles away. Without two-sided stencil, though, you have no way to avoid those pipeline bubbles, and having to re-transform the same vertex will make bigger bubbles.

Short answer: yes.

Thanks -
Cass

Thanks for all your replies!
I’ll probably first try some SSE/3DNow optimizations, since I don’t know if involving more complicated data structures is really worth the work…

Thanks again!
LaBasX2