GeForce4...

This is not at all a problem for faked shadow volumes. Where you expand the back-facing vertices, you always generate a volume behind yourself; it just has to be expanded far enough (onto a faraway sphere, for example…).

Extending the vertices is not enough. The edges and triangles that use these vertices are the real concern; these will intersect the sphere even if the vertices are extended far enough.
Imagine what happens when the light's distance from a surface approaches zero (how far would you need to extend? And it won't be the same for all vertices).

The faces near the light don't get affected at all, because their normals face toward it. Only the faces looking in the other direction get stretched away, and for normal meshes they are reasonably far from the light source to get stretched out enough. You can even stretch by the inverse of the distance if you want. When I have my GF4 I'll see for myself how it's best done…
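
A minimal CPU-side sketch of that extrusion idea, with made-up names throughout (the positions, normals, and the sphere radius are assumptions, and a real implementation would work on silhouette edges and closing caps rather than raw vertices):

    /* Sketch: push every vertex whose normal faces away from the light
     * out along the light-to-vertex direction, onto a big sphere of
     * radius EXTRUDE_DIST around the light.  All names are made up for
     * illustration; this is not anyone's actual shadow-volume code. */
    #include <math.h>

    #define EXTRUDE_DIST 10000.0f   /* "faraway sphere" radius (assumed) */

    typedef struct { float x, y, z; } vec3;

    static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    void extrude_away_facing(vec3 *pos, const vec3 *normal, int count, vec3 light)
    {
        for (int i = 0; i < count; ++i) {
            vec3 d = { pos[i].x - light.x, pos[i].y - light.y, pos[i].z - light.z };
            if (dot(normal[i], d) > 0.0f) {          /* vertex faces away from the light */
                float len = sqrtf(dot(d, d));
                if (len > 0.0f) {
                    float s = EXTRUDE_DIST / len;    /* project onto the far sphere */
                    pos[i].x = light.x + d.x * s;
                    pos[i].y = light.y + d.y * s;
                    pos[i].z = light.z + d.z * s;
                }
            }
        }
    }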

Originally posted by PH:
The closer a light is to a surface, the farther you need to extend the silhouette quads and back capping triangles. This paper mentions the clipping required,
http://developer.nvidia.com/view.asp?IO=cedec_stencil

The advice in the above document is not valid in the context of NV_depth_clamp. You can extend your shadow volumes to infinity without worrying about them being clipped by the far plane.

Cass
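
For reference, a minimal sketch of how the extension is used, assuming only what the NV_depth_clamp spec defines (the GL_DEPTH_CLAMP_NV enable); draw_shadow_volumes() is a placeholder name:

    /* Sketch: with NV_depth_clamp enabled, fragments beyond the far plane
     * are clamped to the far plane instead of being clipped, so the
     * volumes can be extruded arbitrarily far. */
    #include <GL/gl.h>

    #ifndef GL_DEPTH_CLAMP_NV
    #define GL_DEPTH_CLAMP_NV 0x864F
    #endif

    extern void draw_shadow_volumes(void);   /* placeholder for the actual volume rendering */

    void render_stencil_shadow_pass(void)
    {
        glEnable(GL_DEPTH_CLAMP_NV);
        draw_shadow_volumes();                /* extruded to (or toward) infinity */
        glDisable(GL_DEPTH_CLAMP_NV);
    }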

Yes, but I didn’t want to mention anything about infinity (you know why).

EDIT:
I would still recommend clipping the shadows when the light is stationary and extending to infinity when/if it moves (my current approach).

[This message has been edited by PH (edited 02-23-2002).]

What cards will support the GL_NV_depth_clamp extension? I know the GF4 Ti will, but will it also work with the GF3 and possibly the GF2? Maybe I just don't completely understand, but if we use NV_depth_clamp with shadow volumes, will this eliminate all the fancy stuff we have to do when the volume intersects the near/far clip planes? If so, that would be great!

-SirKnight

The GeForce3 supports it; I don't know about the others. But even if they do, there are still cards from other companies that may not.

A quick and dirty hack that avoids fancy clipping is to use the “min” operation on the transformed Z component to pull it in closer than the far Z plane. You can do this with a single instruction in a vertex shader. Thus, you can generate the shadow volume geometry entirely in a vertex shader, by extruding vertices with away-facing normals, and clamping the Z component hither of yon. Both these techniques are imprecise hacks, but I think the gain of keeping it all in hardware might be worth it.
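
To illustrate the clamp described above, here is the same math written out on the CPU; in a vertex program it collapses to a MIN against a scaled copy of w, as the post says. The 0.999 bias and the names are assumptions for the sketch:

    /* Sketch of the "min against the far plane" hack on clip-space coordinates.
     * After projection, a point is inside the far plane when z <= w, so
     * clamping z to slightly less than w pulls extruded vertices back
     * inside the view volume.  The 0.999f factor is an arbitrary bias
     * chosen for illustration. */
    typedef struct { float x, y, z, w; } clip_pos;

    void clamp_to_far_plane(clip_pos *p)
    {
        float limit = p->w * 0.999f;   /* just hither of the far plane */
        if (p->z > limit)
            p->z = limit;              /* one MIN instruction in a vertex program */
    }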

GF3 and beyond support NV_depth_clamp in hardware.

  • Matt

Originally posted by Matt:
GF3 and beyond support NV_depth_clamp in hardware.

Darn, I was afraid of that. Oh well.

Also, Matt, I have a question for you. I was recently messing with (well, trying to anyway) some different extensions from that OpenGL extension PDF, and I came across two that it seems odd I couldn't use: EXT_texture_compression_s3tc and EXT_multi_draw_arrays. I thought my GeForce 256 supported S3TC texture compression, but I get an error back from the glh init extension function. Am I just wrong about the GeForce 256 supporting S3TC? And about the other extension, why couldn't my card support it? It's not a special feature like texture shaders that needs a dedicated part of the chip; it just lets us draw multiple lists of vertices with one function call. This seems like something simple that any GeForce could do in the drivers. What is up with this? Thanks.

P.S. I’m using driver version 23.11. I think those are still the newest drivers for XP.

-SirKnight

[This message has been edited by SirKnight (edited 02-23-2002).]

SirKnight,

Make sure this isn’t a glh problem. Did it work on previous drivers and then stop?

Thanks -
Cass
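
One way to rule the wrapper out is to check the driver's extension string directly; a minimal sketch that assumes nothing beyond core GL calls (the has_extension helper is just a made-up name, and the substring test is deliberately naive):

    /* Sketch: query the driver's extension string directly, bypassing glh,
     * to see whether the extension is really absent or the init wrapper is
     * at fault.  Requires a current GL context. */
    #include <GL/gl.h>
    #include <stdio.h>
    #include <string.h>

    int has_extension(const char *name)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext != NULL && strstr(ext, name) != NULL;
    }

    /* usage, somewhere after the context is created: */
    void check_extensions(void)
    {
        printf("s3tc: %d, multi_draw_arrays: %d\n",
               has_extension("GL_EXT_texture_compression_s3tc"),
               has_extension("GL_EXT_multi_draw_arrays"));
    }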

Well Cass, I never tried it on any other drivers. The ones I use are the latest official ones from NVIDIA for XP. The init works fine for the other extensions I use, like NV_vertex_program, NV_register_combiners, etc. Maybe I'll try out some of those "leaked" drivers. A while back I tried one of those "leaked" versions, I don't know which, 27.00 or something, and it made my main OpenGL test demo all screwed up. It even made my games slow, but it's beta and unofficial so I can't say too much.

I'm not sure what version of drivers I should try, but I guess I'll try something.

Also, Cass, I want to ask you something: where is the NVIDIA office you work at in Austin located? Every day I drive through south Austin and downtown since I go to the Rio Grande ACC campus, and even when I sometimes go to other parts of Austin, I have never seen an NVIDIA building. I'm just curious, that's all.

-SirKnight