why no shadow cubemaps?

Hi Dorbie,

I think both things can be critical, but the extra math is required if you want to match the 2D shadowmap model (which I think is a worthwhile goal).

You need to do the extra math to compute window-space z for the face (which is (A * |ma| + B) / |ma|, where |ma| is the magnitude of the major-axis coordinate and A and B are the projection's depth coefficients), and it can be a little tricky because it can run afoul of perspective-correction optimizations if you're not careful.
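To make that formula concrete, here's a small Python sketch. It assumes a standard GL perspective projection per cube face with depth range [0, 1]; the A and B coefficients below are derived from the near/far planes under that assumption, and `ma_abs` stands for |ma|, the magnitude of the major-axis coordinate of the light-to-fragment vector:

```python
def window_z(ma_abs, near, far):
    """Window-space depth for a cube face: (A * |ma| + B) / |ma|.

    Sketch only -- assumes a standard GL perspective projection with
    glDepthRange(0, 1); A and B are that projection's depth
    coefficients, and ma_abs is |ma|, the magnitude of the major-axis
    coordinate in light space.
    """
    A = far / (far - near)
    B = -far * near / (far - near)
    return (A * ma_abs + B) / ma_abs

# A fragment on the near plane maps to ~0, on the far plane to ~1,
# and the mapping is the usual nonlinear shadow-map depth in between.
print(window_z(1.0, 1.0, 100.0))
print(window_z(100.0, 1.0, 100.0))
```

This is exactly the value a 2D shadow map would have stored for that fragment, which is why matching it lets cubemap lookups reuse the 2D shadow-map comparison model.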

The plumbing issue is more subtle and implementation dependent, but it can certainly be a real impediment to adding this kind of feature.

At any rate, my purpose for posting was to clarify that I think shadow cubemaps are useful and interesting and will eventually be supported directly in hardware. There are just some quirks about them that make them more complicated to implement than you’d think at first blush.

If they were trivial, we'd already have done them. :)

Thanks -
Cass

That's pretty much what I was saying, but you can't match z without implicit knowledge of the projection matrix that rendered the cube faces; hence my alternative approach: rendering shadow depth in linear space to match the Ma projection.

What I don't get is that right now we can simulate a shadow cubemap by rendering our own depth values from a vertex program into a normal renderable color cubemap, then in a shadow fragment program do some multiplies and a comparison to see whether a fragment is in shadow or not. If we can do this so easily, why is it so hard to do in hardware? What we do now with a color cubemap works pretty darn well as far as I'm concerned; what the "hard" issues are, I don't know. The GPU is already capable of doing MUCH more complicated tasks than this. I don't think it's unimplemented because it's non-trivial, but rather because most well-known developers (Carmack in particular) aren't crying for it. If I'm wrong let me know; I just don't see what the big deal is.
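For what it's worth, the comparison being described boils down to very little math. Here's a CPU-side Python sketch of the idea; the cubemap is faked as one stored distance per face, and all the names (`major_axis_face`, `in_shadow`, `face_depths`) are made up for illustration. A real implementation would sample an actual renderable color cubemap in the fragment program:

```python
import math

def major_axis_face(v):
    # Select the cube face from the coordinate with the largest
    # magnitude -- the same selection rule hardware cubemap lookup uses.
    mags = [abs(c) for c in v]
    i = mags.index(max(mags))
    return ("+" if v[i] >= 0 else "-") + "xyz"[i]

def in_shadow(frag_pos, light_pos, face_depths, bias=0.05):
    # face_depths stands in for the color cubemap of light-to-occluder
    # distances (one constant per face here, for simplicity).
    L = [f - l for f, l in zip(frag_pos, light_pos)]
    stored = face_depths[major_axis_face(L)]
    current = math.sqrt(sum(c * c for c in L))
    # In shadow if something closer to the light was rendered this way;
    # the bias fights self-shadowing acne, just as with 2D shadow maps.
    return current - bias > stored

# Occluder at distance 5 along +x; all other faces unoccluded:
depths = {"+x": 5.0, "-x": 1e9, "+y": 1e9, "-y": 1e9, "+z": 1e9, "-z": 1e9}
print(in_shadow((10, 0, 0), (0, 0, 0), depths))  # True  (behind the occluder)
print(in_shadow((3, 0, 0), (0, 0, 0), depths))   # False (in front of it)
```

Note this stores linear distance to the light rather than window-space z, which is the crux of the thread: it works, but it doesn't match the 2D shadow-map depth model without the extra per-face math discussed above.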

-SirKnight

SirKnight, all I can say is that it “just is” more complicated than you’d think. If you’ve ever wanted to make a “small change” to a big piece of software, and found yourself dealing with tons of side effects, you’ll know what I mean. The same thing often happens with “simple” OpenGL extensions. Quite often, even small changes have broad implications.

Certainly if demand was higher, that would have accelerated things.

You know, I just thought of something that would be interesting. Take the MESA code and try to implement (unofficially, of course) shadow cubemap support to see what it would take to do it right. Then surely designing the hardware to take care of most of that would be easier, I would think. Of course it would be slow since it's all software, but this would be a nice proof of concept.

-SirKnight

Originally posted by SirKnight:
[b]You know, I just thought of something that would be interesting. Take the MESA code and try to implement (unofficially, of course) shadow cubemap support to see what it would take to do it right. Then surely designing the hardware to take care of most of that would be easier, I would think. Of course it would be slow since it's all software, but this would be a nice proof of concept.

-SirKnight[/b]
I might be wrong, but I am pretty sure ATI and nVidia DO have their own software renderers just to try out all new ideas before they try to build the hardware. I mean, surely they didn't just implement shaders and such without testing their usefulness in real life first.

Jan.

Then surely designing the hardware to take care of most of that would be easier, I would think.
No.

Writing C++ code and writing hardware are two wildly different things. Oh, they’re both programming (to a degree), but hardware works using entirely different primitives. And doing it efficiently in hardware is completely different from doing something efficiently in C++.

We know what cubemap shadowmapping means; that’s easy enough. The hard part is getting hardware to play along.

Of course, you could always do it the ATi way, where you don’t actually implement shadowmapping in hardware at all, instead relying on the fragment shader to do the shadow computations.

One last thing, about the coordinate that specifies the test value: is it so difficult to just make the texture unit take another parameter for its shadow processing, rather than trying to overload the largest-magnitude coordinate or some other thing?

Originally posted by Korval:
One last thing, about the coordinate that specifies the test value: is it so difficult to just make the texture unit take another parameter for its shadow processing, rather than trying to overload the largest-magnitude coordinate or some other thing?
It’s not too hard to do what you suggest, and that is a reasonable (even desirable) implementation approach given the right underlying microarchitecture.