Help on dual-paraboloid shadow mapping (code & screenshot included) *UPDATE*

having read several tutorials and papers on the topic, i decided to give it a try.

i render my geometry twice, the first time in the lower half of the texture, the second time in the upper half, incrementing ‘u_Pass’.
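for illustration, the viewport split could be written as a small helper like this (the texture size and the exact lower/upper layout are just assumptions based on the description above):

```cpp
#include <cassert>

// Viewport rectangle for one paraboloid pass.
struct Viewport { int x, y, width, height; };

// Pass 0 renders into the lower half of the shadow texture,
// pass 1 into the upper half (matching the u_Pass uniform).
Viewport ParaboloidViewport( int pass, int texWidth, int texHeight )
{
	Viewport vp;
	vp.x      = 0;
	vp.width  = texWidth;
	vp.height = texHeight/2;
	vp.y      = ( pass == 0 ) ? 0 : texHeight/2;
	return vp;
}
```

each pass would then call glViewport( vp.x, vp.y, vp.width, vp.height ) before drawing the geometry with the matching u_Pass value.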


vertex shader:

uniform int u_Pass;
uniform vec3 u_LightPosition;

uniform mat4 u_ViewMatrix;
uniform mat4 u_ProjectionMatrix;

varying vec3 v_Position;

void main()
{
	gl_Position = u_ViewMatrix*gl_Vertex - vec4( u_LightPosition, 0.0 );

	float Length = length( gl_Position.xyz );

	gl_Position /= -Length;

	gl_Position.y *= -1.0;
	gl_Position.x *= -1.0;

	if( u_Pass > 0 )
		gl_Position.z *= -1.0;

	gl_Position.z += 1.0;
	gl_Position.xy /= gl_Position.z;

	if( gl_Position.z >= 0.01 )
	{
		gl_Position.z = Length/20.0*1.02;
		gl_Position.w = 1.0;
	}
	else
	{
		// behind the paraboloid: force the vertex to be clipped
		gl_Position.z = -1.0;
		gl_Position.w = -1.0;
	}
	gl_Position.z = 2.0*gl_Position.z - 1.0;

	v_Position = gl_Position.xyz;
}
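the core paraboloid mapping in the shader can be sanity-checked on the CPU; this is only a sketch of the basic projection for one hemisphere, leaving out the sign flips and the Length/20.0 depth scaling:

```cpp
#include <cassert>
#include <cmath>

// Project a light-space direction onto the unit paraboloid facing +z.
// Returns uv in [-1,1]^2 for directions in the front hemisphere.
void ParaboloidUV( float x, float y, float z, float& u, float& v )
{
	float len = std::sqrt( x*x + y*y + z*z );
	x /= len; y /= len; z /= len;

	// the paraboloid divide: uv = d.xy / (d.z + 1)
	u = x/( z + 1.0f );
	v = y/( z + 1.0f );
}
```

a direction straight down the paraboloid axis lands in the center of the map, and directions at 90° land on the unit circle - which is exactly what the length(v_Position.xy) > 1.005 test in the fragment shader relies on.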

fragment shader:

varying vec3 v_Position;

void main()
{
	// reject fragments outside the paraboloid's unit disc
	if( length( v_Position.xy ) > 1.005 )
		discard;
}

resulting depth map:

  • does all this make sense?
  • atm i send u_LightPosition as the plain translation vector. is this correct or should i multiply it by the view matrix before?
  • and finally: how to access the depth map in my lighting shader, i.e. how to get the correct uv coordinates?!


Personally, I’ve never even really looked at DPSM. Seems to me you’d be far better off looking into shadow cube-maps using the G80 and the upcoming R600. This new technology is insane; it’s here to stay; it’s much easier to use; it’s probably going to look better; and it requires no subdivision of your world geometry for reasonable results.

Check out the texture_array extension and the *_shader4 flavors as well.

P.S. Are those Doom3 characters?

yeah but i cannot afford a gf8 atm and i also cannot require such hardware for my project, so dpsm looks like an interesting solution.

can anyone who has already worked with this technique or who is familiar with the math behind it help me out?

BTW, Nvidia has a demo of a texture atlas approach that simulates hw shadow cubemaps. I haven’t played with that, but you might want to check it out.

ok as i got no further replies i tried to implement shadow cubemaps. so far i’ve managed to render the six views next to each other into one 3072x2048 texture. i’m not sure how to get them into a cubemap, however. can i have shadow cubemaps on a gf7 at all?
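assuming the 3072x2048 texture is a 3x2 grid of 1024² face tiles (a guess at the layout described above), the tile origin per cubemap face could be computed like this:

```cpp
#include <cassert>

// Pixel origin of cubemap face 'face' (0..5, in the order of
// GL_TEXTURE_CUBE_MAP_POSITIVE_X + face) inside a 3x2 atlas of
// tileSize x tileSize tiles. The 3x2 layout is an assumption.
void FaceTileOrigin( int face, int tileSize, int& x, int& y )
{
	x = ( face % 3 )*tileSize;
	y = ( face / 3 )*tileSize;
}
```

each tile could then be copied into the corresponding cubemap face with glCopyTexSubImage2D( GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, … ) while the atlas is the read source - or you render each face directly into the cubemap face instead of going through the atlas at all.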

can i have shadow cubemaps on a gf7 at all?
See table 4.6 Texture Formats in this doc:
I think GF8 can, but haven’t found a link now.

ah crap… so what can i do? use a regular color cubemap and do the depth comparison myself in the fragment shader? i’ll lose filtering on nv cards then, right?
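the manual comparison itself is simple; a sketch (the bias value here is an arbitrary assumption):

```cpp
#include <cassert>

// Manual shadow test against a distance stored in a color cubemap:
// the fragment is shadowed if it is farther from the light than the
// occluder recorded in the map, minus a small bias against acne.
bool InShadow( float storedDistance, float fragmentDistance, float bias = 0.05f )
{
	return fragmentDistance - bias > storedDistance;
}
```

without a depth-format cubemap you do lose the hardware compare-and-filter on those cards: GL_LINEAR on a color map blends the stored distances, not the comparison results.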

Cubemaps are ugly on GPUs that don’t support them natively (6 render passes per light source).
Look at my project
The shadow gear is an included example. It shouldn’t be a problem to translate the script code to another language…
your code looks a little similar to mine :smiley:

yeah i guess we both have the code from the tutorial at :smiley: however, it seems to work fine in your example, but not for me. i’m quite sure that i’m not reading the shadow map correctly in my fragment shader. you’re doing it for eight lights at the same time which is even more confusing. maybe we could pm or chat on icq, i’d greatly appreciate that :slight_smile: you seem to be the only one here who supports dpsm anyway.

ok as almost everyone told me that i should use cube maps instead of dual-paraboloid maps, i gave that a try; as you can see there are (at least) two issues left:

  • there are artifacts in the corners (probably caused by too little precision? i’m using 16bit floating point textures already, though)

  • the shadows are not consistent at the borders of the cube maps (see klesk’s feet)

i can solve #2 by reducing the projection’s angle from 90° to 85°, but then i get worse artifacts; or i can increase the depth offset (when rendering the shadow maps) a bit, which gets rid of the artifacts but then the inconsistency at the borders gets worse.
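a third option would be going the other way: widening each face's frustum by a small guard band so adjacent faces overlap slightly at the seams. a sketch of that idea (the one-texel band is an assumption):

```cpp
#include <cassert>
#include <cmath>

// Widen a 90-degree cube face FOV so the frustum covers one extra
// texel on each side of an n-texel face, giving adjacent faces a
// small overlap at the seams.
float GuardBandFovRadians( int n )
{
	const float base = std::tan( 3.14159265f/4.0f ); // tan(45°) = 1
	return 2.0f*std::atan( base*( n + 2.0f )/n );
}
```

for a 1024² face this comes out only fractionally above 90°, so the extra distortion is far smaller than dropping all the way to 85°.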

i can’t see anything wrong with the corners.

you will have issues at the borders (with GL_LINEAR) due to cubemaps not sampling from adjacent faces,
but no way should they be as bad as what you’re seeing.
it looks like you’re creating the side + bottom SMs from different positions

see here
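the six face cameras must share the exact same eye position; only the forward/up vectors change per face. a sketch of the standard GL cubemap orientations:

```cpp
#include <cassert>

// Forward and up vectors for cubemap face 0..5
// (+X, -X, +Y, -Y, +Z, -Z), standard OpenGL cubemap orientation.
void CubeFaceDirections( int face, float fwd[3], float up[3] )
{
	static const float F[6][3] = { { 1, 0, 0}, {-1, 0, 0}, { 0, 1, 0},
	                               { 0,-1, 0}, { 0, 0, 1}, { 0, 0,-1} };
	static const float U[6][3] = { { 0,-1, 0}, { 0,-1, 0}, { 0, 0, 1},
	                               { 0, 0,-1}, { 0,-1, 0}, { 0,-1, 0} };
	for( int i = 0; i < 3; ++i ) { fwd[i] = F[face][i]; up[i] = U[face][i]; }
}
```

each face's view matrix is then gluLookAt( lightPos, lightPos + fwd, up ) with the same lightPos for all six faces - if any face uses a different eye position, the depths can't line up at the borders.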

well i took a look at the rendered cube maps in gdebugger and i realized that adjacent faces are not consistent at the borders at all:

so there’s probably a problem with my projection matrix setup - does this look correct:

	void Matrix::ComputePerspectiveProjection( float fov, float aspect, float znear, float zfar, bool infinite )
	{
		float fTop = tan( fov*PiOver360 );
		float fBottom = -fTop;

		float fRight = fTop*aspect;
		float fLeft = fBottom*aspect;


		if( infinite )
		{
			const float fInfinity = 0.1f;

			m_afData[0][0] = ( 2.0f*znear )/( fRight-fLeft );
			m_afData[0][2] = ( fRight+fLeft )/( fRight-fLeft );
			m_afData[1][1] = ( 2.0f*znear )/( fTop-fBottom );
			m_afData[1][2] = ( fTop+fBottom )/( fTop-fBottom );
			m_afData[2][2] = fInfinity-1.0f;
			m_afData[2][3] = znear*( fInfinity-2.0f );
			m_afData[3][2] = -1.0f;
		}
		else
		{
			m_afData[0][0] = ( 2.0f*znear )/( fRight-fLeft );
			m_afData[0][2] = ( fRight+fLeft )/( fRight-fLeft );
			m_afData[1][1] = ( 2.0f*znear )/( fTop-fBottom );
			m_afData[1][2] = ( fTop+fBottom )/( fTop-fBottom );
			m_afData[2][2] = -( ( zfar+znear )/( zfar-znear ) );
			m_afData[2][3] = -( ( 2.0f*znear*zfar )/( zfar-znear ) );
			m_afData[3][2] = -1.0f;
		}
	}

for cubemaps, the projection matrix must use a 90-degree FOV and a 1.0 aspect ratio. i just use:
gluPerspective(90, 1.0, zNear, zFar)
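with fov = 90° and aspect = 1.0 the interesting entries collapse nicely, which makes this easy to verify against any hand-rolled matrix code (a sketch of the gluPerspective math):

```cpp
#include <cassert>
#include <cmath>

// The diagonal scale terms of gluPerspective(fovDegrees, aspect, ...):
// f = cot(fov/2), m00 = f/aspect, m11 = f.
void PerspectiveDiagonal( float fovDegrees, float aspect, float& m00, float& m11 )
{
	float f = 1.0f/std::tan( fovDegrees*3.14159265f/360.0f );
	m00 = f/aspect;
	m11 = f;
}
```

for 90°/1.0 both scale terms come out as exactly 1.0 - a quick sanity check for the Matrix::ComputePerspectiveProjection code.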

btw… i have experimented a lot with DPSM and i didn’t like them. tessellating a large map takes forever(!), especially as you really have to subdivide a lot. DPSM demos have small maps and don’t tessellate quite as much as is really needed to avoid artifacts. the massive number of vertices makes it very vertex-limited.

also, i needed a 1024x1024 sized DPSM to match the shadow-edge quality of a 512 sized cube shadow map. that is 4 MB compared to 6 MB. not very much of a savings considering the massive number of vertices needed.
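the arithmetic behind those numbers, assuming 32-bit depth texels and both paraboloid halves packed into the single 1024² texture:

```cpp
#include <cassert>
#include <cmath>

// Shadow map memory in MiB, assuming 4-byte (32-bit) depth texels.
float ShadowMapMiB( int width, int height, int faces )
{
	return faces*width*( float )height*4.0f/( 1024.0f*1024.0f );
}
```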

the only benefit for DPSM is larger batches, and the ability to use depth-textures.

if you really want, i can see if i still have any of my fragment-program code to sample a DPSM from a 3-component direction vector. maybe DPSM will work better for you?
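for reference, the uv math for sampling a DPSM from a light-space direction would look roughly like this, assuming the lower/upper half packing used earlier in the thread (the exact layout and hemisphere split are assumptions):

```cpp
#include <cassert>
#include <cmath>

// Map a normalized light-space direction to uv in a packed DPSM
// (front hemisphere -> lower half, back hemisphere -> upper half).
void SampleDPSMCoords( float x, float y, float z, float& u, float& v )
{
	float len = std::sqrt( x*x + y*y + z*z );
	x /= len; y /= len; z /= len;

	if( z >= 0.0f ) // front paraboloid
	{
		u = x/( z + 1.0f );
		v = y/( z + 1.0f );
		// [-1,1] -> lower half of the texture
		u = 0.5f*u + 0.5f;
		v = 0.25f*v + 0.25f;
	}
	else            // back paraboloid
	{
		u = x/( 1.0f - z );
		v = y/( 1.0f - z );
		// [-1,1] -> upper half of the texture
		u = 0.5f*u + 0.5f;
		v = 0.25f*v + 0.75f;
	}
}
```

in a fragment program this branch is usually replaced by a select on sign(z), but the mapping is the same.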

d’oh… it really was a problem in my projection matrix setup - thanks! :slight_smile: works great now :smiley:

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.