Dual-paraboloid omnidirectional shadow mapping demo

Without that change to the fragment shader, the shadow only “worked” in the sense that there was no specular lighting where the shadow should be.

With the modified fragment shader it looks just like your screenshot.

Good luck! :slight_smile:

(GeForceFX 5800, 67.02)

I tried it a bit more, and it’s the blur variable that always seems to be 1 (or close to it).

If you change the line:
float k = (depth > shadow) ? blur : 1.0;
to:
float k = (depth > shadow) ? 0.0 : 1.0;
it looks correct (and jaggy ;]).
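
For anyone trying to reproduce this, here is roughly the context that line lives in. This is only a sketch of a typical DPOSM fragment test; the sampler and varying names (and the blur-map lookup) are my assumptions, not the demo’s actual shader:

// Rough sketch of a typical DPOSM fragment test (names are assumed, not from the demo).
uniform sampler2D shadowMapFront;   // front paraboloid depth map
uniform sampler2D blurMap;          // blurred silhouette map used for soft edges

varying vec2 paraboloidUV;          // texcoords from the paraboloid projection
varying float depth;                // fragment's distance to the light, normalized

void main()
{
    float shadow = texture2D(shadowMapFront, paraboloidUV).r;  // stored depth
    float blur   = texture2D(blurMap, paraboloidUV).r;         // soft-shadow factor

    // The line in question: if 'blur' is stuck at 1.0, the shadow term never
    // darkens anything, so replacing it with 0.0 gives hard (jaggy) shadows.
    float k = (depth > shadow) ? blur : 1.0;

    gl_FragColor = vec4(vec3(k), 1.0);  // a real shader would modulate the lighting by k
}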

Thanks very much for your feedback, mogumbo and gulgi.

As gulgi pointed out, it seems that
the blur filtering shader is the suspect!

But it’s pretty strange, because
the ForceWare driver compiles the shader correctly.
And of course, the Catalyst driver does too.

Anyway, I “arrested” it for some questions and
just released the new version 0.88.

Could you try it out?
And please let me know whether it works or not.

Oh, I just got an email and heard that…

Now it should run correctly on NVIDIA videocards!!!

I’d like to express my gratitude to all the people who helped me on this forum and by email.
Thanks again! :smiley:

BTW, do you think that this DPOSM will be a major shadowing technique in the near future?

John Carmack decided to replace shadow volumes with shadow mapping in id’s next project.
And it is clear that a dynamic omnidirectional shadowing technique will be needed in games.

I think that this DPOSM will be one of the practical and robust solutions for that.

vs traditional point shadowmaps

pros

  • fewer render targets

cons

  • objects must be more heavily tessellated
  • lower visual quality
  • not as simple

so I’d say they’re roughly equal (though I haven’t fully tested both methods to see which gives the best performance; I may do this next week, but don’t hold me to that)

Honestly, I don’t think DPOSM will be the “major” shadowing technique. Because of the deformations, it requires heavy tessellation, which makes it unsuitable for games that already have a high polycount. You’d need to tessellate all the walls and large flat areas.
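
To show why the tessellation matters: the paraboloid warp is nonlinear but is only applied per vertex, so long edges are interpolated linearly across it and drift away from the true paraboloid. A rough sketch of the usual dual-paraboloid shadow-pass vertex shader (uniform names assumed, not taken from any particular demo):

// Rough sketch of a dual-paraboloid shadow-pass vertex shader (assumed names).
uniform mat4 lightViewMatrix;   // world -> light space
uniform float nearPlane;
uniform float farPlane;
uniform float direction;        // +1.0 for the front paraboloid, -1.0 for the back

void main()
{
    vec4 pos = lightViewMatrix * gl_Vertex;   // vertex in light space
    pos /= pos.w;
    pos.z *= direction;                       // flip for the back hemisphere

    float len = length(pos.xyz);              // distance to the light
    pos.xyz /= len;                           // point on the unit sphere

    // Nonlinear warp onto the paraboloid: exact only at the vertices;
    // everything in between is interpolated linearly, hence the need
    // for well-tessellated geometry.
    pos.x /= pos.z + 1.0;
    pos.y /= pos.z + 1.0;

    pos.z = (len - nearPlane) / (farPlane - nearPlane);  // depth written to the map
    pos.w = 1.0;
    gl_Position = pos;
}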

Personally I prefer to use cube maps. The main drawback with cube maps is the update time, since you basically need to update 6 views instead of 2 with DPOSM. But using a cache system and a priority queue (only updating the most important cube maps each frame), it’s possible to get pretty good framerates. Applying the cube map to the scene is also much simpler, since there is no need for any projection of any sort.
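
For comparison, the cube-map lookup in the lighting pass is roughly the following; the names are assumptions, but the point is that the light-to-fragment vector addresses the cube map directly, with no projection at all:

// Rough sketch of a cube-map shadow test in the lighting pass (assumed names).
uniform samplerCube shadowCube;   // stores normalized light-to-fragment distance
uniform vec3 lightPosition;       // light position in world space
uniform float lightRange;         // used to normalize distances into [0,1]

varying vec3 worldPosition;       // fragment position in world space

void main()
{
    vec3 toFragment = worldPosition - lightPosition;
    float fragmentDist = length(toFragment) / lightRange;

    // The direction itself addresses the cube map; no shadow matrix needed.
    float storedDist = textureCube(shadowCube, toFragment).r;

    float lit = (fragmentDist - 0.005 > storedDist) ? 0.0 : 1.0;  // small bias
    gl_FragColor = vec4(vec3(lit), 1.0);  // modulate the lighting result by 'lit'
}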

You can see it in action here: http://www.fl-tw.com/Infinity/Docs/Demos/et_corridor_ppl_11.jpg
http://www.fl-tw.com/Infinity/Docs/Demos/et_corridor_ppl_14.jpg

Y.

Beautiful pictures, Ysaneya.
I have to ask - what tessellation scheme are you using in your terrain videos?

The demo runs fine on a Radeon 9700 Pro.

About the matrix stack underflow… I had some problems with a project of mine as well on NVIDIA cards, because I missed doing correct matrix pushing and popping. ATI cards are probably more lenient in this respect, while NVIDIA cards seem to produce quite strange behaviour if the matrix stacks aren’t handled correctly (in my case, texture fetches in a fragment program produced quite strange results).

OT:
Also, I just downloaded the terrain videos from Ysaneya, and all I’ve got to say is wow :slight_smile: … I already thought about implementing dynamic fractal terrain generation to render whole planets, but never found the time to really start this project. So it’s just great to see that it works really well and generates wonderful results.

Now it works on my 6800GT. Good work slang. Is there some difference in the way ATI and NVidia implement GLSL that was causing the problem?

Originally posted by Ysaneya:
Personally I prefer to use cube maps. The main drawback with cube maps is the update time, since you basically need to update 6 views instead of 2 with DPOSM. But using a cache system and a priority queue (only updating the most important cube maps each frame), it’s possible to get pretty good framerates. Applying the cube map to the scene is also much simpler, since there is no need for any projection of any sort.
Yes, certainly DPOSM requires a well-tessellated scene in order to transform it into the dual-paraboloid light space. But it has the advantage that soft shadowing is easy to do.

Though the quality of the soft shadows in my demo is not good, you can do soft shadowing with DPOSM using only 4 textures: two shadow maps, and another two blurred edge (silhouette) maps.

To tell the truth, I haven’t implemented cube-mapped omnidirectional shadow mapping (CMOSM?), but I think it is too expensive to do soft shadowing with CMOSM, because you need 12 passes (or more?) in total to do it.

And CMOSM doesn’t require any special vertex transformation, but in both DPOSM and CMOSM you need to render out the distance between the vertex and the light source, because, you know, it is currently impossible to store the depth values into the textures directly, due to the lack of support for the GL_DEPTH_COMPONENT format with GL_TEXTURE_CUBE_MAP.
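
As a rough illustration of that, the shadow pass for either technique ends up writing the light distance into a color channel, something like this sketch (not my demo’s actual code; the varying name is made up):

// Sketch: shadow-pass fragment shader writing the light distance as color,
// since GL_DEPTH_COMPONENT cube maps are not available (assumed varying name).
varying float distanceToLight;   // computed per vertex, normalized to [0,1]

void main()
{
    // Stored in an ordinary color texture and compared against in the lighting pass.
    gl_FragColor = vec4(distanceToLight, 0.0, 0.0, 1.0);
}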

So I think the performance is almost equal between transforming all the vertices into the dual-paraboloid light space and storing the depth into 2 textures, and rendering out the distance between the vertex and the light source and storing the result into 6 textures.

Anyway, your screenshots are pretty cool. :cool:
I gotta implement the CMOSM too and see which works better.

(And sorry for my poor English. I usually speak Japanese.)

Originally posted by mogumbo:
Now it works on my 6800GT. Good work slang. Is there some difference in the way ATI and NVidia implement GLSL that was causing the problem?
I’m glad to hear that.

Well, the issue was caused because I had forgotten to turn the texture units off. :eek:
So after all, the problem was not in my shaders but in the main program.

I’m sorry that I caused such a disturbance to everyone here.

Seems like I just found a bug:
ScreenShot
It appears when two shadows are connected.

@Ysaneya: what navigation controls did you use to record this smooth animation? Cursor keys & mouse & …? How do you influence the flight speed, direction, …?

Originally posted by SKoder:
Seems like I just found a bug:
ScreenShot
It appears when two shadows are connected.

That’s why the demo version is NOT 1.0.

Well, the artifact cannot be avoided with the current silhouette detection algorithm.
So I’m now researching other algorithms to solve that.

Do you have any ideas?

That’s a bit OT so I hope nobody will complain, but here are some answers about my planet engine:

knackered:

I have to ask - what tessellation scheme are you using in your terrain videos?
Geomipmapping with 33x33 patches. The LOD calculations are basically done in “flat space” (i.e. a plane), which is later distorted into a piece of a sphere. A whole planet is made as a cube (6 “faces”) deformed into a whole sphere. This avoids distortions at the poles.
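
The cube-to-sphere deformation itself is basically just a normalization of the cube-face point, something like this tiny sketch (GLSL syntax for brevity; the function name is made up):

// Sketch: deform a point on a unit cube face onto the planet sphere.
vec3 cubeToSphere(vec3 cubePoint, float planetRadius)
{
    // Normalizing projects the cube-face point onto the unit sphere;
    // scaling by the radius gives the final planet-surface position.
    return normalize(cubePoint) * planetRadius;
}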

The heights are calculated from 18 octaves of Perlin noise, which I’m in the process of improving now (it’s far too repetitive at the whole-planet scale).
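
The height function is essentially a fractal sum of noise octaves, roughly like the sketch below (noise2D() is assumed to be provided elsewhere, and the gain/lacunarity values are typical guesses, not the actual parameters):

// Sketch: fractal sum of noise octaves for terrain height (assumed helper and parameters).
float noise2D(vec2 p);   // assumed to be defined elsewhere

float terrainHeight(vec2 p)
{
    float height    = 0.0;
    float amplitude = 1.0;
    float frequency = 1.0;

    for (int i = 0; i < 18; ++i)   // 18 octaves, as described above
    {
        height    += amplitude * noise2D(p * frequency);
        amplitude *= 0.5;          // assumed gain
        frequency *= 2.0;          // assumed lacunarity
    }
    return height;
}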

The LOD engine is pretty fast. When geomorphing is disabled and the polycount is upped, I get up to 70 million triangles per second on a P4 2.8 GHz + Radeon 9700.

Chuck0:

So it’s just great to see that it works really well and generates wonderful results.
Thanks, but that’s still very far from what I’m trying to do. What is shown in the video is the geometric engine. It’s missing lighting/shadowing and texturing. It’s also lacking the atmosphere, the details (vegetation), the water, the clouds, etc… I hope to get something half-decent in the next 6 months. If I succeed I’ll post a newsbit here on OpenGL.org.

Hampel:

what navigation controls did you use to record this smooth animation? Cursor keys & mouse & …? How do you influence the flight speed, direction, …?
In real time, movement is Newtonian-based (your typical rigid body physics). To record the path I press a key, which adds a “waypoint” to two splines (one for position, one for view direction). When recording the AVI, the splines are simply played back, independently of the framerate (which suffers a lot due to writing the video to disk).

Y.

Geomipmapping with 33x33 patches. The LOD calculations are basically done in “flat space” (i.e. a plane), which is later distorted into a piece of a sphere. A whole planet is made as a cube (6 “faces”) deformed into a whole sphere. This avoids distortions at the poles.
yeah, but it bunches them up at the corners :wink: (joking), they can be minimized

The heights are calculated from 18 octaves of Perlin noise, which I’m in the process of improving now (it’s far too repetitive at the whole-planet scale).
Perlin noise, whilst giving sort of acceptable results at a cursory glance, doesn’t really cut it compared to how real terrain is generated, e.g. by glacial/volcanic/seismic activity. Do you know of any better-looking (but still reasonably fast) methods? Please don’t mention vterrain.org.

yeah, but it bunches them up at the corners :wink: (joking), they can be minimized

Yeah, that’s true :slight_smile: But you know, at a planet scale, the chance of landing near a seam where it’s visible is pretty small.

Do you know of any better-looking (but still reasonably fast) methods? Please don’t mention vterrain.org.

Texturing and Modeling: A Procedural Approach gives some pretty good hints. Their multifractal terrain looks quite good, I think. Check out their Gaia zoom at http://www.kenmusgrave.com/animations.html

Although multifractal gives better results than pure noise, I do believe it’s only good for terrains at the kilometer level. At the planet scale, you must take into account different things, like the latitude/longitude, the probability of having sand/ice deserts, temperature, humidity, etc… I do not have a precise answer, except that I believe the best solution would be a combination of planet-scale parameters, multifractal, and then noise for ground details.

Y.