No way to apply shadow-mapping for point lights?

Hi,

Is it possible at all to apply shadow-mapping for point light sources?
Suppose we use a cube map to hold all six shadow maps from the point light’s perspective. Then we have each fragment’s tex coords (which mirror the fragment’s position coords) transformed into light space to serve as a direction vector addressing the cube shadow map. The addressed texel then holds the window depth coord from the perspective of the face it belongs to.
The problem then is this: we need to transform the fragment’s tex coords into the perspective of the correct face, then perform perspective division and range-map the r-coord to convert it to a window depth coord from 0 to 1… only then can we compare the fragment’s r-coord against the texel’s depth value. Frankly, I don’t see how this can be done…
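To make the difficulty concrete, here is a minimal CPU-side sketch (my own illustration, not code from the thread; the face conventions and near/far values are assumptions) of picking the cube face a light-space direction addresses, and computing the 0-to-1 window depth that face’s 90-degree shadow pass would store for that fragment:

```python
def cube_face(v):
    # The cube face is selected by the dominant axis of the direction vector.
    x, y, z = v
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+X' if x > 0 else '-X'
    if ay >= az:
        return '+Y' if y > 0 else '-Y'
    return '+Z' if z > 0 else '-Z'

def window_depth(v, near, far):
    # Distance along the face's forward axis is the dominant component.
    d = max(abs(c) for c in v)
    # Standard perspective projection followed by perspective division:
    # z_ndc is a hyperbolic function of d, then range-mapped [-1,1] -> [0,1].
    z_ndc = (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * d)
    return 0.5 * z_ndc + 0.5
```

A fragment at the near plane maps to depth 0 and one at the far plane to depth 1; the hard part the post describes is doing exactly this per-face remapping for the *fragment’s own* r-coord at lookup time.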
Does anybody know if this can be done via some sort of high-level shading language like Nvidia’s Cg NV30 profile etc?

Yes, you can use point lights for shadows, but rather than go the cube map route you are probably better off implementing point lights as 6 spotlights with 90-degree FOVs.

Works for me anyway, but will depend on your application.
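For reference, a minimal sketch (my own, assuming the conventional OpenGL cube-map face orientations) of the six view directions and up vectors you would point those spotlights along, one per cube-map face:

```python
# View direction and up vector for each of the six 90-degree spotlight
# passes, matching the usual OpenGL cube-map face layout.
FACES = {
    '+X': ((1, 0, 0), (0, -1, 0)),
    '-X': ((-1, 0, 0), (0, -1, 0)),
    '+Y': ((0, 1, 0), (0, 0, 1)),
    '-Y': ((0, -1, 0), (0, 0, -1)),
    '+Z': ((0, 0, 1), (0, -1, 0)),
    '-Z': ((0, 0, -1), (0, -1, 0)),
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))
```

Each pass uses a square 90-degree-FOV projection, so the six frusta tile the full sphere around the light with no gaps or overlap along the face seams.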

Yeah, I kinda thought the cube map route was a disappointing dead end anyway.
I also thought of using six 90-degree spotlights to simulate a point light… though the overhead does sound a little scary.
How do you find this in practice?

According to John Carmack, Doom3 will be using 50+ passes.

Yikes, sounds like it borders on the insane… kinda puts things in perspective :^)

I wouldn’t panic too much about ‘50+ passes’ as, AFAICT, 50+ on a 4 texture card is actually only 13 passes… I think… :slight_smile:

In practice, the 6-spotlight approach works quite well… if the light is behind the camera’s near view plane, you don’t need to account for all 6 lights, just the ones facing the same direction as the view vector.

Leathal.

Originally posted by solarforge:
[b]I wouldn’t panic too much about ‘50+ passes’ as, AFAICT, 50+ on a 4 texture card is actually only 13 passes… I think…

In practice, the 6-spotlight approach works quite well… if the light is behind the camera’s near view plane, you don’t need to account for all 6 lights, just the ones facing the same direction as the view vector.

Leathal.[/b]

Thanks for the info…appreciate it :wink:

Originally posted by zed:
According to John Carmack, Doom3 will be using 50+ passes.

No offense, but what’s that got to do with shadow mapping? Doom3 uses shadow volumes, which are a completely different beast.

Originally posted by MelvinEng:
Yeah, I kinda thought the cube map route was a disappointing dead end anyway.

I think it’s rather the future. With the new fragment capabilities of the Radeon 9700 and NV30 it will be a piece of cake to render the radial distance (as opposed to depth) into a cubemap. It will be even simpler to set up than for a spotlight.

Originally posted by Humus:
I think it’s rather the future. With the new fragment capabilities of the Radeon 9700 and NV30 it will be a piece of cake to render the radial distance (as opposed to depth) into a cubemap. It will be even simpler to set up than for a spotlight.

How would you go about using radial distance instead of depth? Would you use the light->fragment vector to index into the cube map and compare the radial distance?

[This message has been edited by pocketmoon (edited 09-10-2002).]

pocketmoon, what do you think? Your sentence makes sense, doesn’t it?

Store, in each direction of the cubemap, the distance from the light to the point (you simply have a floating-point texture you render to…)

Look up this value per-pixel by generating the point - light vector, calculating its length, and comparing… you have full floating point in the pixel shader, so you can do it all in there…
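As a concrete CPU-side sketch of that comparison (my own illustration, not from the thread; the bias term is my addition to avoid self-shadowing acne):

```python
import math

def shadow_test(light_pos, frag_pos, stored_dist, bias=1e-3):
    # Vector from the light to the fragment; its length is the fragment's
    # radial distance, compared against the distance stored in the cube map.
    v = [f - l for f, l in zip(frag_pos, light_pos)]
    frag_dist = math.sqrt(sum(c * c for c in v))
    # Lit if the fragment is no farther than the first occluder the light saw.
    return frag_dist <= stored_dist + bias
```

On the GPU the same vector would also serve as the cube-map lookup direction, so one interpolated quantity drives both the fetch and the comparison.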

You’d still have to render your geometry 6 times just for one light, which is going to cause slowdown.

Depending on the future of the hw, you won’t… and with clever culling you actually don’t send much more than the scene surrounding the light source once…

Originally posted by knackered:
No offense, but what’s that got to do with shadow mapping? Doom3 uses shadow volumes, which are a completely different beast.

Sorry, to clarify: it means don’t be afraid of drawing 6 views for a light (i.e. nowadays you shouldn’t be as afraid of doing multiple passes/draws as in the past).

Personally, I think shadow mapping is a much, MUCH better idea than stencil shadows (like I’ve said before, IMO stencil shadows are just a passing fad).

davepermen,

Am I hearing you right? You are actually defending shadowmaps :slight_smile:

Oh well, on the subject: not only do you not have to send all the geometry 6 times, but since lights usually have small radii it is a heck of a lot less geometry than the whole scene. You would not have to render geometry past where the light shines, and point lights like, say, a light bulb or the glow of a rocket are not very bright, and affect very little geometry overall.

It would be so nice if the next gen cards supported floating point cube buffers!

Actually, if you read my post in the other thread about shadow maps, you would know that I pointed out that distance is not important in shadow mapping. All you really need to do is map a function which will uniquely identify each pixel and then look for mismatches. Because of limited precision, you should be able to use such a function to find near matches.

So, you could quantize the x, y, z light-space position of the pixel into an r, g, b color cube map and then use a fragment program to calculate the distance between the shadow map’s r, g, b and the actual light-space position determined by some other method (say, a 3D texture or an inverse transform of the window-space coordinate into light space; I dunno, be creative ^_^)

If the distance is small, then the pixel is lit; if it is too large, then it is in shadow. This is a lot like color-index shadow mapping, but per-pixel.
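A tiny numeric sketch of that idea (my own, hypothetical; the light-volume extent and tolerance are assumed parameters):

```python
def quantize(p, extent):
    # Map each light-space coordinate from [-extent, extent] to an 8-bit
    # channel, i.e. the r,g,b stored in the color cube map.
    return tuple(int((c / extent * 0.5 + 0.5) * 255) for c in p)

def is_lit(stored_rgb, frag_pos, extent, tolerance):
    # Re-quantize the fragment's own light-space position and compare it
    # against the position the shadow pass stored; a near match means lit.
    frag_rgb = quantize(frag_pos, extent)
    err2 = sum((a - b) ** 2 for a, b in zip(stored_rgb, frag_rgb))
    return err2 <= tolerance ** 2
```

The tolerance absorbs the quantization error; a pixel whose stored position clearly disagrees with its actual position was occluded by something nearer the light.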

It would still be nicer to have a float cube map and just have the float result there without losing any precision to quantization.

Hmm, perhaps the fragment program could encode the distance into RGBA and then decode it from that, so that you could use the full 32 bits of precision. That would even have more usable bits of precision than a floating-point number.
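A possible CPU-side sketch of that pack/unpack (hypothetical; it assumes the distance has already been normalized to [0, 1), and a real fragment program would use multiply/frac tricks instead of integer shifts):

```python
def pack_rgba(x):
    # Scale x in [0,1) to a 32-bit integer and split it into four 8-bit
    # channels, one per RGBA8 component.
    v = int(x * 4294967296.0)          # 2**32; requires 0 <= x < 1
    return (v >> 24 & 255, v >> 16 & 255, v >> 8 & 255, v & 255)

def unpack_rgba(rgba):
    # Reassemble the 32-bit integer and scale back to [0,1).
    r, g, b, a = rgba
    return ((r << 24) | (g << 16) | (b << 8) | a) / 4294967296.0
```

Round-tripping loses at most 2^-32 of the value, which is the point being made: four 8-bit channels used this way carry more usable precision than one low-precision depth channel.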

solarforge,

Saying that you only have to render the lights that shine in the same direction as the view vector (I am assuming you mean they have a positive dot product) is completely wrong. You can have shadows that are cast towards you from a light in front of you; the light vector and the view vector in that case point in opposite directions.

Originally posted by davepermen:
[b]pocketmoon, what do you think? Your sentence makes sense, doesn’t it?

Store, in each direction of the cubemap, the distance from the light to the point (you simply have a floating-point texture you render to…)

Look up this value per-pixel by generating the point - light vector, calculating its length, and comparing… you have full floating point in the pixel shader, so you can do it all in there…[/b]

=> I think we may have a problem here (as pointed out in the 1st post)… the depth values stored in the cube face’s shadow map are hyperbolic window-space depth coordinates in the face’s perspective, not radial distances… which means the fragment’s r-coord also has to be a hyperbolic window-space depth coordinate in that face’s perspective, exactly like what we do for spotlight shadow mapping (aka projective texturing)… question is, how to do this?

>>davepermen,
Am I hearing you right? You are actually defending shadowmaps ^_^<<
hehe, ****, you got me!

>>I think we may have a problem here (as pointed out in the 1st post)… the depth values stored in the cube face’s shadow map are hyperbolic window-space depth coordinates in the face’s perspective, not radial distances… which means the fragment’s r-coord also has to be a hyperbolic window-space depth coordinate in that face’s perspective, exactly like what we do for spotlight shadow mapping (aka projective texturing)… question is, how to do this?<<

You can calculate the radial distance in every direction, store it per pixel in the cubemap, and look the radial distance in that direction up again, no problem there… just calculate (light.objectspace.pos - fragment.objectspace.pos) in the vertex shader, send it through an interpolator to the pixel shader, and calc the dp3 with itself there… and you’ve got it… where’s the problem again? HEHE

davepermen,

I still don’t understand one thing…

Originally posted by davepermen:
pocketmoon, what do you think? Your sentence makes sense, doesn’t it?

What was wrong with my original sentence ?!

[This message has been edited by pocketmoon (edited 09-11-2002).]

The problem with your (very good) cube-map idea is the filtering. The best filtering method for shadow maps is nearest-neighbour filtering, and it is not implemented for cube maps. Maybe it can be faked in a fragment program…
Another way is to use a high resolution (but a cube map with a 512x512 texture per face would be expensive…)

Originally posted by tayo:
The problem with your (very good) cube-map idea is the filtering. <SNIP>

NV30 FP (fragment programs).