Me and my (occluder depth) shadows ...

Dave, I think you need to rephrase. It sounds like you’re talking about blurring a depth map, or having some kind of sliced-image approach. The original percentage closer filtering paper had lots of info on this (it is basically about how you blur a depth map based on those relationships). There are also other shadow texture (not shadow map) approaches which can convolve the image, but fundamentally the two seem incompatible.

So… when you say blur the light-space map, you still need some test to determine where the map will get applied; you cannot simply project it because there is no test. As such you probably need multiple convolved light-space maps (shadow textures). An approach like this has in fact been published.

P.S. I see you mention applying the depth test again after the filter. The devil’s in the details here when you think about how this test will interact with the filter; then there’s the prospect of overlapping filters at different depths, i.e. multiple samples. The fundamental problem, it seems to me, is that the test to apply the image after you’ve convolved is also the filter. This is a pretty fundamental issue which will involve full PCS or clipping the penumbra somewhere between deep shadow and full light.

Gaby, all soft shadows really are is point-light shadows done multiple times. All you need in order to do soft shadows, once you can do hard shadows, is more speed. I.e., if I have the hardware to do 100 lights casting shadows in a scene, my scene would probably look better with 10 lights casting shadows at 10 samples each.
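Just as a rough sketch of what I mean (the sampler and coordinate names here are made up; assume one ordinary shadow map rendered from each of four jittered positions on the light):

[code]
// Average several hard shadow tests, one per jittered sample position on the light.
// shadowMap0..3 / lightCoord0..3 are illustrative names; depth bias is omitted.
float softShadow(float4 lightCoord0, float4 lightCoord1,
                 float4 lightCoord2, float4 lightCoord3,
                 uniform sampler2D shadowMap0, uniform sampler2D shadowMap1,
                 uniform sampler2D shadowMap2, uniform sampler2D shadowMap3)
{
    float lit = 0.0;
    // Each term is an ordinary hard test: fragment depth vs. the stored depth.
    lit += (lightCoord0.z / lightCoord0.w <= tex2D(shadowMap0, lightCoord0.xy / lightCoord0.w).r) ? 1.0 : 0.0;
    lit += (lightCoord1.z / lightCoord1.w <= tex2D(shadowMap1, lightCoord1.xy / lightCoord1.w).r) ? 1.0 : 0.0;
    lit += (lightCoord2.z / lightCoord2.w <= tex2D(shadowMap2, lightCoord2.xy / lightCoord2.w).r) ? 1.0 : 0.0;
    lit += (lightCoord3.z / lightCoord3.w <= tex2D(shadowMap3, lightCoord3.xy / lightCoord3.w).r) ? 1.0 : 0.0;
    return lit * 0.25; // one sample = hard shadow; more samples = softer penumbra
}
[/code]

With a single sample this degenerates to the usual hard shadow test, which is exactly the point.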

Global Illumination (GI) is not really a step, or even a jump, from soft shadows. It is a completely different beast, done in fundamentally different ways from the raster images that hardware is designed to handle.

The main reason for my pessimism is that GI’s complexity grows faster than linearly with the number of triangles and lights in a scene. Faster than linear growth is a big enemy of being real-time.

The other reason is that hardware accelerators are fast because they work locally and they stream data. GI is fundamentally different because it needs to randomly access the scene as it distributes energy or traces light rays. The main CPU is then a better place for it.

For these reasons, I do not think GI will ever be a part of OpenGL because it requires a retained mode library, and needs to be computed by something fundamentally different than a graphics accelerator.

Of course, OpenGL is becoming more and more like a retained mode library, but it retains streams, not scene graphs.

(Please do not think that I am saying that you think GI will become a part of OpenGL. I was just stating my opinion.)

I am disputing that it is just a ‘jump’ from soft shadows. You could probably add less than 50 lines of code to Doom 3 and have soft shadows, and it uses only DX8-level tech. However, it would probably need next year’s graphics cards to be fast enough ^_^.

The point is that hard shadows are just a special case of soft shadows. They are soft shadows with only 1 sample. There is no logical step (or jump) to GI from them.

Originally posted by dorbie:
P.S. I see you mention applying the depth test again after the filter. The devil’s in the details here when you think about how this test will interact with the filter; then there’s the prospect of overlapping filters at different depths, i.e. multiple samples. The fundamental problem, it seems to me, is that the test to apply the image after you’ve convolved is also the filter. This is a pretty fundamental issue which will involve full PCS or clipping the penumbra somewhere between deep shadow and full light.

Yeah, I know. But I think you got what I mean: the idea is to do the shadow test in light space and render that result into the shadow map as well. I know the devil is in the details, right where the blurring happens. That’s why I think the occluder depth from our pocky could be helpful there, possibly…

And I’m thinking about some other plans… what else we could render into the shadow map as additional info: distance to the edge, angle of the shadow ray around the edge, distance to the light, and the shadowing factor due to all that… hehe

In the end, only multisampling is the real way to go, or at least it looks like that… but if a single sample could say more than just true/false, that would be helpful anyway…

I was pondering some sort of depth map blurring, but with a second channel in the depth map which would hold a shadow ‘alpha’.

First capture the depth map and set alpha to 1.0. Then dilate the depth map somehow so that depth values spread out (the minimum depth spreading out over larger depths…) and dilate the alpha map so that texels which have been ‘dilated’ get reduced alpha.

So a 1D depth map for a light source at y = +10 and a cube at y = +5 above a flat ground plane would be:

Pass 1:
Depth: 10  10  10  10   5   5   5   5  10  10  10  10
Alpha:  1   1   1   1   1   1   1   1   1   1   1   1

Then after dilating:
Depth: 10  10   5   5   5   5   5   5   5   5  10  10
Alpha:  1   1  .2  .7   1   1   1   1  .7  .2   1   1

(multiple dilation passes required!)

I can see how to dilate the depth map:
d[x][y] = min( d[x+p][y+q] for p in -1..1, q in -1..1 )
But dilating the alpha at a[x][y] has to be based on whether d[x][y] was replaced. Fragment shader time.
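Something like this for one pass, as a rough Cg-ish sketch (shadowMap, texelSize and the 0.5 alpha falloff are just placeholder choices; r holds depth, g holds the shadow alpha):

[code]
// One dilation pass over a two-channel (depth, alpha) map.
float2 dilate(float2 uv,
              uniform sampler2D shadowMap,  // r = depth, g = shadow alpha
              uniform float2 texelSize)     // 1.0 / map resolution
{
    float2 centre   = tex2D(shadowMap, uv).rg;
    float  minDepth = centre.r;

    // Spread the minimum depth from the 3x3 neighbourhood.
    for (int q = -1; q <= 1; q++)
        for (int p = -1; p <= 1; p++)
            minDepth = min(minDepth, tex2D(shadowMap, uv + float2(p, q) * texelSize).r);

    // If the centre depth got replaced, this texel has been 'dilated': reduce its alpha.
    float alpha = (minDepth < centre.r) ? centre.g * 0.5 : centre.g;

    return float2(minDepth, alpha);
}
[/code]

Run it a few times, knocking the alpha down a bit more on each pass, and you get roughly the 1D numbers above.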

I have no idea if this would work, but if it were feasible it would need a deep shadow map - if you had an object below the cube but within its penumbra AND below the table, it would appear shadowed by the penumbra rather than by the table.

In my current Cg shader I just sample my deep shadow map to get 4 depths, d.xyzw, and compare against these four, starting with the 1st occluder (nearest the light, stored in w), until I find one the current fragment depth is behind. This would have to change to sample the shadow alpha as well, and to continue checking (.z, .y and .x) if the alpha < 1.0, since the fragment may be inside an occluder’s penumbra BUT then fully or partially occluded by another surface further from the light.
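Roughly like this, maybe (just a sketch of the modified lookup as I described it; deepMap, alphaMap and bias are placeholder names, and I’ve collapsed the ‘keep checking while alpha < 1’ logic into a min over every layer the fragment is behind, which comes out the same):

[code]
// Deep shadow lookup extended with a per-layer shadow alpha.
// d.w is the occluder nearest the light, d.x the furthest; alpha 1 = fully occluding.
float deepShadowAlpha(float2 uv, float fragDepth,
                      uniform sampler2D deepMap,
                      uniform sampler2D alphaMap,
                      uniform float bias)
{
    float4 d = tex2D(deepMap, uv);
    float4 a = tex2D(alphaMap, uv);
    float light = 1.0;

    // For every layer the fragment lies behind, apply that layer's shadow alpha.
    // A partially occluding (dilated) layer can still be overridden by a deeper,
    // fully occluding one - the 'penumbra over the table' case above.
    if (fragDepth > d.w + bias) light = min(light, 1.0 - a.w);
    if (fragDepth > d.z + bias) light = min(light, 1.0 - a.z);
    if (fragDepth > d.y + bias) light = min(light, 1.0 - a.y);
    if (fragDepth > d.x + bias) light = min(light, 1.0 - a.x);

    return light;
}
[/code]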

This could be the caffeine talking.

[This message has been edited by pocketmoon (edited 10-15-2002).]

Nakoruru,

Yes, I know all that: I have studied a lot of the algorithms that have been developed in computer graphics… When I said “jump to GI”, I meant that if we want to go beyond raytraced soft shadows to enhance lighting realism, we must go the GI way. I know that it is a global approach, which is not, at this time, the one taken by OpenGL or by raster-based graphics accelerator chips. That’s not the case with ART hardware, for example, or massively parallel processing hardware, which are better adapted to a global rendering approach and a general-purpose architecture. But at this time, that hardware is not used in real-time applications. I hope that in a few years, graphics chips will be general-purpose oriented, like the 3Dlabs one… But solutions might come from a hybrid approach…

Regards,

Gaby

Gaby, I’m currently working on the shift from rasterizing to raytracing fully on hardware. R300 and NV30 will be poor at it, but basically they should already be able to implement my interface (though a full software tracer could implement it far better, and possibly beat them in speed). The interface is rather simple, and the optimizations a GPU could get to make the whole thing blindingly fast (well… real-time) are already known… That the design fits onto today’s hardware makes it nice to implement at first on top of OpenGL (2.0), and then let hardware vendors support extensions directly for that rendering path, until we get some real hardware.

I haven’t written much down on paper, though… I really need coding holidays again (like a LAN party, but coding together).

Yes, a hybrid approach is what I imagine as well. Use some highly parallel “GI processor” to solve for lighting, then use the results with a rasterizing renderer. This is how people add raytracing and radiosity to RenderMan. RenderMan has a couple of points in the shading language where results from raytracers and radiosity engines can be retrieved.

Of course, for radiosity this could just be a texture map lookup, just like a lightmap in Quake. A raytracer would probably be a little more difficult.

[This message has been edited by Nakoruru (edited 10-15-2002).]

davepermen,

Do you agree that an array of DSP-like processors, computing many ray/triangle intersections with barycentric coordinates and Phong normal interpolation per cycle, will always be faster for raytracing than a raster system like the R300?

I think yes, because the first is made for raytracing and the second is not. So I think that an array of ray/triangle intersection engines, combined with the current texture access mechanism and video memory structure, would be a good generic rendering solution on which you could run Maya, RenderMan, or whatever rendering engine based on ray/triangle intersection… But it’s clear that real-time is not for today! Now, my hope is that, in a few years, companies like nVidia will orient their development in this direction, which means building a more programmable API with a retained mode, as HLS is slowly moving towards…
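For reference, the kind of per-ray work I have in mind is roughly this (a Möller-Trumbore style ray/triangle test with barycentric coordinates and Phong normal interpolation, written as a Cg-like function purely for illustration; a dedicated unit would do many of these per cycle):

[code]
// Ray/triangle intersection returning the distance and the interpolated Phong normal.
// v0..v2 are the triangle vertices, n0..n2 their vertex normals.
bool intersect(float3 orig, float3 dir,
               float3 v0, float3 v1, float3 v2,
               float3 n0, float3 n1, float3 n2,
               out float t, out float3 shadeNormal)
{
    t = 0.0;
    shadeNormal = float3(0, 0, 1);

    float3 e1  = v1 - v0;
    float3 e2  = v2 - v0;
    float3 p   = cross(dir, e2);
    float  det = dot(e1, p);
    if (abs(det) < 1e-6) return false;           // ray parallel to the triangle

    float  invDet = 1.0 / det;
    float3 s = orig - v0;
    float  u = dot(s, p) * invDet;               // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return false;

    float3 q = cross(s, e1);
    float  v = dot(dir, q) * invDet;             // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return false;

    t = dot(e2, q) * invDet;                     // hit distance along the ray
    if (t < 0.0) return false;

    // Phong normal: blend the vertex normals with the barycentric weights.
    shadeNormal = normalize(n0 * (1.0 - u - v) + n1 * u + n2 * v);
    return true;
}
[/code]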

That’s my hope,

Regards

Gaby

I’m saying that even a simple P4 can outperform the R300 with my design.
I just stated that my design fits on an R300, which is a good base for extending future GPUs with some extensions optimizing that design. There are lots of optimizations possible (on the GPU only) that could boost the performance of my design by a factor of 10, 100, 1000, 10000000000000000000000000000000000000000

No, I mean that getting the number of needed intersection tests, ray processes and pixel shades down to a minimum can be done quite easily in hardware, but not in software…

But the design fits… like OpenGL fits around rasterizing, no matter how it’s implemented in hardware or software…

And it fits on today’s GPU hardware. That is important, as it could get GPU developers to add little GL extensions that each help a tiny bit to speed up the whole thing… GPU developers do like raytracing; both nVidia and ATI have stated they want to do it, one day… they are just scared of the big step. I will make that step unneeded…

Sounds great, no? Well, I feel great at the moment, so I’m babbling a little. But I’m optimistic about my approach anyway.

If I get support, raytracing done properly in hardware is possible in… 3 years.

Waiting to get a credit card so I can buy a VapoChill, waiting for a 3 GHz P4 to put into the VapoChill, and then waiting to get enough money together for an additional Radeon 9700. Then I can set up my tiny API design structure for a little demo…

pocketmoon

What you have per pixel on each shadow is the distance to the occluder, and you fade according to it. How about blurring according to it as well? This is essentially what we talked about in other threads, and we thought it would look good… Of course, you need the surface normal per pixel to blur in the right directions and dimensions, but I’m sure you could get that. It would look cool, I bet… and you could still fade them as well.

Originally posted by davepermen:
[b]pocketmoon

What you have per pixel on each shadow is the distance to the occluder, and you fade according to it. How about blurring according to it as well? This is essentially what we talked about in other threads, and we thought it would look good… Of course, you need the surface normal per pixel to blur in the right directions and dimensions, but I’m sure you could get that. It would look cool, I bet… and you could still fade them as well. [/b]

Yes, I do expand my filter size based on the occluder depth, but not by much. Also, the screenshots are low-res, so it’s hard to see the blurring. I think the key, as you say, is sampling in the correct ‘dimension’. The first demo sampled in texture space; the current demo samples in screen space (ddx, ddy). Both give artifacts when the scale gets big enough to create a penumbra (rather than just softened edges). I should be sampling in tangent space, but I can see that your suggestion of using the surface normal should work.
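The occluder-depth scaling bit looks roughly like this in isolation (a sketch, not the actual shader; penumbraScale is a hand-tuned constant and the 4-tap kernel stands in for the real filter):

[code]
// Widen a small filter kernel with the fragment-to-occluder distance.
float blurredShadow(float2 uv, float fragDepth,
                    uniform sampler2D shadowMap,     // r = occluder depth
                    uniform float penumbraScale)
{
    // How far the receiver is behind whatever occludes it.
    float occluderDepth = tex2D(shadowMap, uv).r;
    float penumbra = max(fragDepth - occluderDepth, 0.0) * penumbraScale;

    // 4-tap filter whose footprint grows with the penumbra estimate.
    float2 offsets[4] = { float2(-1, -1), float2( 1, -1),
                          float2(-1,  1), float2( 1,  1) };
    float lit = 0.0;
    for (int i = 0; i < 4; i++)
    {
        float stored = tex2D(shadowMap, uv + offsets[i] * penumbra).r;
        lit += (fragDepth <= stored) ? 1.0 : 0.0;
    }
    return lit * 0.25;
}
[/code]

The artifacts come from those offsets living in texture (or screen) space rather than on the surface itself.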

I’m working on a new Cg demo, using shadow volumes for something other than shadows so I’ll probably return to multi-sample shadow maps after that.

I’m at work now and my home broadband has been out due to the rain! It’s back on today so when I get home I’ll post the latest shader code and perhaps Nakoruru can have a go at implementing surface normal space sampling

I might have a go at it if I knew what you meant :)

I am having trouble imagining what you mean by sampling in tangent space or using the surface normal. Which surface normal? The occluder or the surface being rendered?

It is interesting that you blur in screen space, but wouldn’t you end up with weird artifacts, like when a character is in the water in Super Mario Sunshine and Starfox Adventures? Have you noticed how much screen-space distortion/blurring is used on the GameCube? It must be really easy to use the framebuffer as a texture on that system. Anyway, because of the way they do it, things in and out of the water seem to get smeared in a weird way. I would be afraid that blurring in screen space would smear colors where they do not belong.

I guess you mean perhaps to project the sample kernel onto the surface? i.e., if you drew the sample points onto the surface it would look like it was lying flat on the surface instead of oriented like a billboard?
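If so, something like this is what I picture (all names hypothetical; toLightSpace stands for whatever transform the shader already uses to get shadow map coordinates): build a tangent basis around the normal and offset the samples along the surface before projecting them.

[code]
// Project a 2D sample kernel onto the receiving surface before the shadow lookups.
float surfaceKernelShadow(float3 worldPos, float3 normal,
                          uniform float4x4 toLightSpace,  // world -> shadow map
                          uniform sampler2D shadowMap,
                          uniform float kernelRadius)
{
    // A tangent basis lying in the surface plane.
    float3 up      = (abs(normal.y) < 0.99) ? float3(0, 1, 0) : float3(1, 0, 0);
    float3 tangent = normalize(cross(up, normal));
    float3 bitan   = cross(normal, tangent);

    float2 kernel[4] = { float2(-1, -1), float2( 1, -1),
                         float2(-1,  1), float2( 1,  1) };
    float lit = 0.0;
    for (int i = 0; i < 4; i++)
    {
        // Offset along the surface, not along a screen-facing billboard.
        float3 p  = worldPos + (tangent * kernel[i].x + bitan * kernel[i].y) * kernelRadius;
        float4 lp = mul(toLightSpace, float4(p, 1.0));
        float  stored = tex2D(shadowMap, lp.xy / lp.w).r;
        lit += (lp.z / lp.w <= stored) ? 1.0 : 0.0;
    }
    return lit * 0.25;
}
[/code]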