Adaptive Shadow Maps

Maybe, if this is the case, then it really is a good method. I'll try it.

The reason people are saying it’s CPU intensive is that the paper recommends reading back the z-buffer to main memory and searching for the smallest z value, then re-rendering the scene with an adjusted front clip plane to get maximum precision. They say it “only” takes 10 ms, which is a lot for games of course! Still sounds like a good idea (sans the z-buffer reading).
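For reference, the read-back step they describe amounts to something like this (a minimal sketch; `winW`/`winH` and the un-projection of the minimum depth back to an eye-space clip plane are assumptions):

```c
/* Sketch: read back the depth buffer and find the smallest z value.
   winW/winH are the viewport size (assumed known). */
float *depth = (float *)malloc(winW * winH * sizeof(float));
glReadPixels(0, 0, winW, winH, GL_DEPTH_COMPONENT, GL_FLOAT, depth);

float zmin = 1.0f;
for (int i = 0; i < winW * winH; ++i)
    if (depth[i] < zmin)
        zmin = depth[i];
free(depth);

/* Un-project zmin to an eye-space distance, then re-render with that as
   the (tighter) front clip plane for maximum depth precision. */
```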

Ok, the readback and z-buffer optimization does sound rather problematic. The main problem with that is not really the CPU expense, but the loss of synchronization (which happens with traditional shadow mapping as well if you don’t have WGL_render_texture and WGL_depth_texture).

I guess it’s not completely hardware accelerated, but I still think it is the best shadow method discovered yet.

Let’s just try it out, so we can prove whether perspective shadow maps are better than shadow volumes. I already started a small demo yesterday in which I tried to render the shadow map using the post-perspective matrix (described here: http://www-sop.inria.fr/reves/research/psm/ ), but it still looks strange. Has anyone had more success with building the post-perspective matrix?
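For what it’s worth, since projective transforms compose, the concatenation itself can be built on the OpenGL matrix stacks roughly like this (a sketch only; the light frustum extents and `lightPps`, the light position transformed into post-perspective space, are placeholders you have to derive):

```c
/* Sketch: one combined transform, world space -> camera post-perspective
   space -> light clip space.  All names below are assumptions. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -1.0, 1.0, lightNear, lightFar); /* enclose the unit cube */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(lightPps[0], lightPps[1], lightPps[2], /* light position in pps   */
          0.0, 0.0, 0.0,                         /* centre of the unit cube */
          0.0, 1.0, 0.0);
glMultMatrixd(camProjection); /* the camera's projection ...              */
glMultMatrixd(camModelview);  /* ... and view, applied to vertices first  */
/* render the occluders into the depth texture with this combined matrix */
```

One known gotcha (discussed in the paper) is geometry behind the camera, which the projective transform flips; the authors handle it by virtually moving the camera backwards, and missing that step may be the source of the “strange” results.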

Thanks
LaBasX2

Definitely, no sense in arguing over it without any implementations.

Originally posted by Nakoruru:
This method also handles directional lights, which I have never seen done using shadow maps at all.

How about rendering the shadow map with glOrtho? Shadow maps for directional light sources are EASY…
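Roughly like this, say (a sketch; `sceneRadius`, `center`, and `lightDir` are assumed to describe a bound on what must cast into the map):

```c
/* Sketch: directional-light shadow pass with an orthographic projection
   looking along the light direction.  The scene bounds are assumptions. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-sceneRadius, sceneRadius,  /* left/right */
        -sceneRadius, sceneRadius,  /* bottom/top */
        0.0, 2.0 * sceneRadius);    /* near/far   */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(center[0] - lightDir[0] * sceneRadius, /* pull back along the light */
          center[1] - lightDir[1] * sceneRadius,
          center[2] - lightDir[2] * sceneRadius,
          center[0], center[1], center[2],
          0.0, 1.0, 0.0);
/* render the occluders, then use the resulting depth as the shadow map */
```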
On the other hand, I have never yet seen an implementation of point lights at acceptable speed, as you need to render the six sides of the cube…

Shadow volumes will not be doomed, as they help in many other situations as well…

Shadow volumes are the most generic solution to shadows… not the fastest, but they are the only generic per-pixel-accurate solution for rasterizers. The “fastest” method for exact shadows would still be, cough, raytracing… (compared to the rest of the rendering…)

Our definitions of generic may differ, so it may be pointless to argue, but I say that shadow maps are definitely more generic because they work with anything you can render. Shadow volumes require a different code path for each object representation you may want to use, hardly what one usually considers general. They also fail on alpha-tested primitives.

Of course one could use glOrtho to make traditional shadow maps for directional lights, but unlike spotlights, the size of the map you would have to create, and what needs to be rendered into it, is a big question. With traditional shadow maps you render everything the light shines on, but for a directional light that is everything! At least with an omni light you can represent that using a cube map, but for a directional light you may need to squeeze the entire scene into your shadow map in some cases. Since PSM only renders what is in the post-perspective-space cube, it only needs to render one shadow map for any type of light, including omni lights.

I have never seen anyone implement cube-map omni-light shadow maps anyway, and with PSM I see no reason why they would ever need to, because PSM would almost certainly be faster in any case: it only requires rendering the scene once per light, not six times.

Someone needs to make a fair shadow map / shadow volume comparison demo, but it won’t be me, because I currently feel that implementing stencil shadows would be a waste of my time :)

Implementing stencil shadow volumes is not, in my opinion, a waste of time. The next generation of hardware is specifically optimized for rendering to depth/stencil. With some hard work, shadow volumes can be fast. In fact, I’m constantly surprised by how fast current hardware is, even if you don’t do any optimizations.

Both methods have good and bad points; the solution is not to discard one of them but possibly to combine them.

Originally posted by Nakoruru:
Shadow volumes require a different code path for each object representation you may want to use, hardly what one usually considers general. They also fail on alpha-tested primitives.

Yes, the alpha test doesn’t work. No, no different code paths are needed, and yes, it’s all on the GPU here (as soon as I have at least a GF3).

For alpha-tested primitives shadow maps work better, but for really transparent objects they have the same problems shadow volumes have… so your approach is not REALLY generic either…

Of course one could use glOrtho to make traditional shadow maps for directional lights, but unlike spotlights, the size of the map you would have to create, and what needs to be rendered into it, is a big question. With traditional shadow maps you render everything the light shines on, but for a directional light that is everything!

It’s just the whole scene, but that’s always the case for global lights…

At least with an omni light you can represent that using a cube map, but for a directional light you may need to squeeze the entire scene into your shadow map in some cases. Since PSM only renders what is in the post-perspective-space cube, it only needs to render one shadow map for any type of light, including omni lights.

Yeah, PSMs are much better for shadow mapping, there’s no arguing that point… (including point lights; I’m coming to this soon)

I have never seen anyone implement cube-map omni-light shadow maps anyway, and with PSM I see no reason why they would ever need to, because PSM would almost certainly be faster in any case: it only requires rendering the scene once per light, not six times.

… What if the light source is IN the unit cube? There is NO WAY to get the shadow map then, as you once again need a frustum wider than 180 degrees, and that’s impossible, with or without PSM. And that’s why shadow maps aren’t generic at all. They work as long as the lights are either spotlights (directional or omni lights) or, now with PSM at much higher quality, outside of the scene…
There is STILL no way to get a simple, generic shadow map for a point light inside the scene.

Someone needs to make a fair shadow map / shadow volume comparison demo, but it won’t be me, because I currently feel that implementing stencil shadows would be a waste of my time :)

First provide me a solution for a simple point light in the middle of a room with some objects floating around it, with only one render-to-depth-texture. You should use PSM, but that doesn’t help you get it over 180°… prove me wrong…



You have a good ‘point’, Davepermen; I’ll have to think about it.

EDIT:

Thought it over :)

Okay, I see now that it would require a cube map if the light is in the view frustum. But I do not understand why you consider this flaw enough to disqualify shadow maps from being generic. Big deal, so now I have two cases in my code. My view is that it’s one case, with an optimization for lights outside the view frustum.

I really do not want to get into a big contest over which is more general (but don’t worry, I will anyway). Sometimes it’s just a matter of opinion or marketing. But my opinion is that when I have to write a shadow-volume vertex shader to match every vertex shader I write, the code is not general. When I have to come up with a different method for point-sprite shadows, alpha-tested polygon shadows, stenciled polygon shadows (if I could even still use the stencil buffer for something else), and frag-killed polygons, then it’s not general.

Even if the idea of using a cube map for each point light is bad right now, increases in fill rate and memory will make it worthwhile sooner than we may think.

The simple fact is that shadow maps can shadow anything you can throw into the framebuffer with Z, while shadow volumes can only shadow what you can describe as vertices and edges.

Shadow maps will get better as memory, fill-rate, and precision improve. Stencil shadow volumes are about as good as they are going to get.

I measure generality in code paths. I could write a shadow map renderer with one code path (excluding paths for different extensions): basically, just render the shadow map to a cube map. Hardware will eventually make this fast enough that I would not even have to optimize for the cases where a 2D texture would work.

In the worst case, if I implemented a stencil shadow volume renderer, I would have one code path for each vertex shader, because I would need each of them to transform and extrude the silhouette edges. I am not sure how one goes about capping the volumes for each case. In the best case I would just transform all my vertices the same way, but then what is the point of programmability?
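For context, the extrusion being talked about is roughly the following when done on the CPU (a minimal sketch; the edge/adjacency structures are assumptions, and winding and capping are glossed over):

```c
/* Sketch: emit the sides of a shadow volume for a point light by extruding
   silhouette edges (edges where exactly one adjacent triangle faces the
   light) away from it.  Winding and the caps are omitted. */
typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 a, b; int triA, triB; } Edge; /* assumed adjacency info */

static Vec3 extrude(Vec3 v, Vec3 light, float dist)
{
    Vec3 e = { v.x + (v.x - light.x) * dist,
               v.y + (v.y - light.y) * dist,
               v.z + (v.z - light.z) * dist };
    return e;
}

void draw_volume_sides(const Edge *edges, int n, const int *triFacesLight, Vec3 light)
{
    glBegin(GL_QUADS);
    for (int i = 0; i < n; ++i) {
        if (triFacesLight[edges[i].triA] == triFacesLight[edges[i].triB])
            continue; /* not a silhouette edge */
        Vec3 a = edges[i].a, b = edges[i].b;
        Vec3 a2 = extrude(a, light, 1000.0f), b2 = extrude(b, light, 1000.0f);
        glVertex3f(a.x, a.y, a.z);    glVertex3f(b.x, b.y, b.z);
        glVertex3f(b2.x, b2.y, b2.z); glVertex3f(a2.x, a2.y, a2.z);
    }
    glEnd();
}
```

Moving that per-vertex work onto the GPU is exactly what forces a matching volume-extrusion variant of every vertex shader.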

So, in my opinion, stencil volume shadows are a temporary solution until precision and memory problems are sorted out in hardware. They are doomed. They are doomed soon enough that I do not see the use of starting a new project that uses them.

[This message has been edited by Nakoruru (edited 08-07-2002).]

So perspective shadow maps aren’t the ultimate shadow solution either, as Dave found out… well, that would probably have been too nice…

BTW, Nakoruru, there is a demo with source that shows how to use shadow maps for point lights with a cube map; it is at:
http://freefall.freehosting.net/downloads/cubicshadows.html

LaBasX2

Thanks for the link, although I think I was complaining that I had not seen directional lights implemented or even explained.

EDIT: It says there is no hardware support for cubic depth maps. This surprised me at first; then I realized why (the hardware relies on the texture being projected in a certain way). That’s a bummer; I guess I really will need a next-generation card to implement all my plans.

[This message has been edited by Nakoruru (edited 08-07-2002).]

Originally posted by Nakoruru:
It says there is no hardware support for cubic depth maps. This surprised me at first; then I realized why (the hardware relies on the texture being projected in a certain way). That’s a bummer; I guess I really will need a next-generation card to implement all my plans.

Well, next-gen hardware doesn’t have any problem implementing them (if YOU get the math right… there are some funny problems, but you can solve them…).

The major problems are:
a) You NEED to render the scene six times, and as long as hardware doesn’t support “cubic rendering” that is just a hell of a slow thing (while shadow volumes are only one additional pass on the next hardware, though…).
b) The cube map normally will not be that high-res, will it? I mean, rendering six high-res images… for each light?
c) I have NO idea yet how to get the CSM (cubic shadow map) done in pps (post-projection space), but I bet you can at least get that.

(a) and (b) are problems as long as there is no cubic shadow map support; (c) can be solved (and partially solves (b)).

Think about it: 512×512×6 texels at 4 bytes each, about 6 MB, should be more or less the size of a cubic shadow map that looks… good… (with pps shadow mapping it should work well). That is 6 MB of space for each light, and six extra passes per light… Shadow volumes eat much less memory and many fewer passes, and are always per-pixel accurate… for today, they are THE solution. (But when we move from shadowing point lights (directionals are point lights as well… just infinitely far away) to area lights, shadow maps could be far more helpful…)

And there is still the raytracing approach, which gets more and more interesting with each new hardware generation (I’ll get an NV30 just for raytracing in hardware…).

Maybe it is better to use both methods, one per type of light source. I think point lights are very important for a scene and can’t be skipped; spotlights are more of a fine-tuning thing.

Working some stuff out on paper, trying to determine how drawing a cube shadow map in post-perspective space would work (more on that later), I incidentally determined that spotlights with an angle greater than 90 degrees can be stretched out further than 180 degrees in post-perspective space if they are inside the view frustum and the view frustum has a FOV greater than 90 degrees.

This can probably be generalized by saying that a spotlight with an angle of N degrees can be over-stretched in post-perspective space if the FOV is greater than 180−N degrees (for example, a 120-degree spotlight can be over-stretched once the FOV exceeds 60 degrees).

This means that with perspective shadow mapping one may have to use cube maps even for a spotlight, if that spotlight is inside the view frustum. Luckily it only happens with wide spotlights combined with wide fields of view.

As for the broader question of whether shadow cube maps can be drawn in post-perspective space, I have almost concluded that it’s no problem, and that it has the same benefit as non-cube maps in that it provides more detail where it’s needed. You just draw the cube map from the position of the light in post-perspective space.

Now if only support for cube shadow maps were to appear.

Shadow cube map support is already here, just at low precision (which shouldn’t be such a problem in pps anyway…), and you can even get high precision out of it with some math. The problem is the rendering of the six faces. As long as there is no hardware that can render to cube maps, I don’t see much future in generic shadow maps… even though pps shadow maps solve one of the big problems, precision, they don’t yet solve genericity… too bad. We have to wait for the NV35 or NV40 for this…

Davepermen,

Hardware has been able to render cube maps for a long time. It just takes more passes than rendering just one of the faces :)

I was thinking the same thing as jwatte.

Do you mean something better than rendering to a pbuffer texture? I mean, you -can- render to a cube map, you just have to make six calls to wglSetPbufferAttribARB to change faces.
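The six face tokens are consecutive, so the switch is just a loop (a sketch assuming WGL_ARB_render_texture; error handling omitted):

```c
/* Sketch: select each cube face of a render-texture pbuffer in turn. */
for (int face = 0; face < 6; ++face) {
    int attribs[] = { WGL_CUBE_MAP_FACE_ARB,
                      WGL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + face,
                      0 };
    wglSetPbufferAttribARB(hPbuffer, attribs);
    /* set up the 90-degree view for this face and render the scene */
}
```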

I guess you mean that you want to be able to send all your geometry once and have it rendered across all six faces.

That would be cool (really freaking cool!), but I do not think having to render one face at a time is a huge bottleneck if you have a good visibility determination method, because you would not actually end up sending most of your geometry more than once.
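A minimal sketch of the kind of per-face test I mean, assuming bounding spheres in light space (for the +X face, the four side planes of the 90-degree face frustum are x = ±y and x = ±z):

```c
/* Sketch: conservative bounding-sphere vs. +X-face-frustum test.
   The sphere centre is assumed to already be in light space. */
typedef struct { float x, y, z, r; } Sphere;

int sphere_hits_pos_x_face(Sphere s)
{
    const float k = 0.70710678f; /* 1/sqrt(2), normalizes the plane normals */
    if ((s.x - s.y) * k < -s.r) return 0; /* outside plane x =  y */
    if ((s.x + s.y) * k < -s.r) return 0; /* outside plane x = -y */
    if ((s.x - s.z) * k < -s.r) return 0; /* outside plane x =  z */
    if ((s.x + s.z) * k < -s.r) return 0; /* outside plane x = -z */
    return 1; /* conservatively send the object to this face */
}
```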

I do admit that some situations would be really bad, and all-at-once cubic rendering would solve that.

My first stab at how it would work…

You would need six sets of geometry state, one for each cube face. You would want to use the same vertex program, but you would need six different transformations. When you send geometry, it is transformed by all six programs using the different matrices and then sent to each of the six pipelines for clipping. Polygons on the edge of the screen would end up being rasterized on more than one face.

The really cool thing about this is that it could probably be generalized to render any six transformations you want into six different buffers. You could do things like soft shadows, motion blur, or depth of field in parallel instead of in series, if you had a good way to composite all six framebuffers.

Does anyone see this type of massively parallel hardware existing anytime soon?

The latest cards already have several parallel vertex processors and fragment processors, so it’s not too far off. I’m not sure whether the triangle setup is currently as parallel; I suppose it must be…

Anyway, as soon as we have a card with six vertex units and six or more fragment units, with a bit of extra state tracking (i.e. all units need not run the same program), it could maybe write to a cube map at 1/6 normal speed.

It would still be faster than the current setup because there would be no context switching between pbuffers.

Current DX9 cards (e.g. the R300) can write to up to four render targets at once, but I think they all share the same rasterizer, so that would need fixing… Currently you can write color to one buffer, normals to another, and depth to another, but all with the same geometry.

You don’t need six rasterizers, you just need one (as a triangle ends up in one place on the cube map anyway). It just has to generate the proper scanlines for each square, and the thing is done. It would be cool if cube map rendering did not need to divide the scene into six parts, render six frames, and copy six times. It would help a lot with fast dynamic reflections, shadow maps, and more… (you could finally blur the cube map properly in hardware as well, for diffuse reflection maps).