# Taking multiple samples in cubemap shadowmaps

I got cubemap shadowmaps working, and I like them because they take far fewer instructions than my flat unwrapped fake cubemaps. The lookup is just a vec4.

However, the same characteristic that makes them so attractive creates a new problem…how the heck do I sample adjacent pixels? It’s easy with a 2D texture because I can just add 1/texturesize to get the next pixel’s coordinates. With cubemaps, I don’t know which face is being used, which makes it a lot harder. Any suggestions?

Well, PCF shadows are actually “fake” anyway. So it’s not really critical to sample adjacent pixels; the general idea is to sample “some area of the pixels”.

So what you could do is to jitter the lookup vector “slightly” for each sample. Now, how to do that is a good question. You could just add a bunch of small offsets to the direction (no need to normalize afterwards). This is easy, but samples will be distributed in a sphere around the vector’s end point, whereas ideally you would want them to be distributed in a small disk perpendicular to the vector. This could be done by rotating the vector by some small angles (which is obviously more complex than just adding some offsets).
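The offset variant can be sketched in a few lines. This is CPU-side C++ rather than GLSL so the numbers are easy to check; the offset pattern, the radius, and the `sampleShadow` callback are all illustrative assumptions, not anyone’s actual shader:

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Sample the cubemap several times with slightly perturbed direction
// vectors and average the comparison results. The lookup direction does
// not need to be normalized afterwards, so we can simply add offsets.
// 'sampleShadow' stands in for the actual shadowCube/textureCube fetch.
template <typename ShadowFn>
float PCFCube(const Vec3& dir, float radius, ShadowFn sampleShadow)
{
    // A handful of fixed offsets; a real shader would likely use a
    // Poisson or rotated-grid pattern instead.
    static const std::array<Vec3, 4> offsets = {{
        {  1.0f,  1.0f, 0.0f }, { -1.0f,  1.0f, 0.0f },
        {  1.0f, -1.0f, 0.0f }, { -1.0f, -1.0f, 0.0f }
    }};
    float sum = 0.0f;
    for (const Vec3& o : offsets)
    {
        Vec3 jittered = { dir.x + o.x * radius,
                          dir.y + o.y * radius,
                          dir.z + o.z * radius };
        sum += sampleShadow(jittered);
    }
    return sum / offsets.size();
}
```

As noted above, the samples end up distributed in a small cube around the vector’s end point rather than a disk perpendicular to it, which is usually good enough for PCF.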

From here:

http://developer.nvidia.com/object/cube_map_ogl_tutorial.html

For cube map texturing, the (s,t,r) texture coordinates are treated as a direction vector (rx,ry,rz) emanating from the center of a cube. (The q coordinate can be ignored since it merely scales the vector without affecting the direction.) At texture application time, the interpolated per-fragment (s,t,r) selects one of the cube map face’s 2D mipmap sets based on the largest magnitude coordinate direction (the major axis direction). The target column in the table below explains how the major axis direction maps to the 2D image of a particular cube map target.

``````major axis
direction     target                              sc     tc    ma
----------    ---------------------------------   ---    ---   ---
+rx          GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT   -rz    -ry   rx
-rx          GL_TEXTURE_CUBE_MAP_NEGATIVE_X_EXT   +rz    -ry   rx
+ry          GL_TEXTURE_CUBE_MAP_POSITIVE_Y_EXT   +rx    +rz   ry
-ry          GL_TEXTURE_CUBE_MAP_NEGATIVE_Y_EXT   +rx    -rz   ry
+rz          GL_TEXTURE_CUBE_MAP_POSITIVE_Z_EXT   +rx    -ry   rz
-rz          GL_TEXTURE_CUBE_MAP_NEGATIVE_Z_EXT   -rx    -ry   rz
``````

Using the sc, tc, and ma determined by the major axis direction as specified in the table above, an updated (s,t) is calculated as follows:

``````s   =   ( sc/|ma| + 1 ) / 2
t   =   ( tc/|ma| + 1 ) / 2
``````

If |ma| is zero or very nearly zero, the results of the above two equations are undefined (though the result may not lead to GL interruption or termination). Once the cube map face’s 2D mipmap set and (s,t) are determined, texture fetching and filtering proceeds like standard OpenGL 2D texturing.
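The table and equations above translate almost mechanically into code. A sketch (function and variable names are mine; the face indices 0–5 follow the GL enum order +X, -X, +Y, -Y, +Z, -Z):

```cpp
#include <cmath>

// Given a cubemap lookup direction (rx,ry,rz), return the face index
// (0..5 in GL order +X,-X,+Y,-Y,+Z,-Z) and the (s,t) coordinates inside
// that face, following the table from the NVIDIA cube map tutorial.
void CubeFaceST(float rx, float ry, float rz, int& face, float& s, float& t)
{
    float ax = std::fabs(rx), ay = std::fabs(ry), az = std::fabs(rz);
    float sc, tc, ma;
    if (ax >= ay && ax >= az)  // major axis is X
    {
        face = rx > 0.0f ? 0 : 1;  ma = ax;
        sc = rx > 0.0f ? -rz : rz;  tc = -ry;
    }
    else if (ay >= az)         // major axis is Y
    {
        face = ry > 0.0f ? 2 : 3;  ma = ay;
        sc = rx;  tc = ry > 0.0f ? rz : -rz;
    }
    else                       // major axis is Z
    {
        face = rz > 0.0f ? 4 : 5;  ma = az;
        sc = rz > 0.0f ? rx : -rx;  tc = -ry;
    }
    s = (sc / ma + 1.0f) / 2.0f;
    t = (tc / ma + 1.0f) / 2.0f;
}
```

A direction straight down a face axis, e.g. (1,0,0), lands at the face center (s,t) = (0.5, 0.5), as expected.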

So with the above you could potentially reverse it - get the offset you want, add to s,t, then re-generate a vector?

I am more in favor of the jitter cloud approach NeARAZ proposed; it’s way faster than the other method and has the possibility of selecting from multiple cubemap faces.

I guess you could also extract the tangent/bitangent and work with those.

> So with the above you could potentially reverse it - get the offset you want, add to s,t, then re-generate a vector?

That sort of defeats the purpose of using cubemaps over a flat texture in the first place.

It sounds like a simple problem at first. “Oh, I’ll just offset the vector a little bit.” But then you start thinking more specifically about it and it gets weird. Right now I am thinking I will offset the vector by a 3D coordinate based somehow on the vector…like (1-x,1-y,1-z) or something.

Well, I took the easy way out and just did this, and it looks fine:

``````vec4 cubecoord = vec4( lightdir, max(max(abs(lightdir.x),abs(lightdir.y)),abs(lightdir.z)) );
float zFar = gl_LightSource[0].linearAttenuation;
float zNear = 0.1;
float a = zFar / ( zFar - zNear );
float b = zFar * zNear / ( zNear - zFar );
cubecoord.w = a + b / cubecoord.w;

float ps = 8.0 / gl_LightSource[0].quadraticAttenuation;
``````

Well, you could create an orthonormal basis for your direction vector, in which the direction vector points along the +Z axis.

Then, you can create samples on the hemisphere around the +Z axis and transform/rotate/align them properly with respect to the direction.

Creating samples on the hemisphere could be done e.g. by creating Poisson samples on the 2D disk and projecting them up to the hemisphere. There is some literature out there about how to do this, e.g. http://www.cs.virginia.edu/~gfx/pubs/antimony/

``````
matrix3x3 transform = CreateOrthoNormalBasis(direction);

float coverage = 0.0f;
for each sample s
{
    float2 sample_2d = CreateDiskSample2D(s);
    // lift the disk sample onto the unit hemisphere: z = sqrt(1 - x^2 - y^2)
    float3 sample_3d = float3(sample_2d, sqrt(1 - dot(sample_2d, sample_2d)));
    float3 transformed_sample = Mul(transform, sample_3d);
    // accumulate the shadow comparison for each jittered direction
    coverage += ShadowTest(transformed_sample);
}
coverage /= sample_count;
``````
``````template <typename T>
// Build two vectors v2, v3 orthogonal to v1 (assumed normalized),
// choosing the larger components so the division stays well conditioned.
void OrthonormalBasis(const Vector3<T>& v1, Vector3<T>& v2, Vector3<T>& v3)
{
    if (fabsf(v1.x) > fabsf(v1.y))
    {
        T inv_length = 1 / sqrt(v1.x * v1.x + v1.z * v1.z);
        v2 = Vector3<T>(-v1.z * inv_length, 0, v1.x * inv_length);
    }
    else
    {
        T inv_length = 1 / sqrt(v1.y * v1.y + v1.z * v1.z);
        v2 = Vector3<T>(0, v1.z * inv_length, -v1.y * inv_length);
    }
    v3 = Cross(v1, v2);
}
``````
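Putting the two snippets together, here is a self-contained single-precision version (the struct and names are mine) that builds the basis and lifts a 2D disk sample onto the hemisphere around the direction:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

static float Dot(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3 Cross(const V3& a, const V3& b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Lift a 2D disk sample (px,py with px^2+py^2 <= 1) onto the unit
// hemisphere around 'dir' (assumed normalized): build the orthonormal
// basis, set z = sqrt(1 - px^2 - py^2), then rotate into world space.
V3 HemisphereSample(const V3& dir, float px, float py)
{
    V3 t, b;
    if (std::fabs(dir.x) > std::fabs(dir.y))
    {
        float inv = 1.0f / std::sqrt(dir.x*dir.x + dir.z*dir.z);
        t = { -dir.z * inv, 0.0f, dir.x * inv };
    }
    else
    {
        float inv = 1.0f / std::sqrt(dir.y*dir.y + dir.z*dir.z);
        t = { 0.0f, dir.z * inv, -dir.y * inv };
    }
    b = Cross(dir, t);
    float pz = std::sqrt(std::fmax(0.0f, 1.0f - px*px - py*py));
    return { t.x*px + b.x*py + dir.x*pz,
             t.y*px + b.y*py + dir.y*pz,
             t.z*px + b.z*py + dir.z*pz };
}
```

The result is always unit length and leans toward 'dir', so feeding it straight into the cubemap lookup gives samples spread across the cone around the original direction, crossing face boundaries for free.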

Is there a sample code base to pull this from? I understand how to use it in GLSL, but I’m not quite sure how to set it up in the actual C++. What goes in the texture matrix?

I’ve been trying to find some demo code on this, but have only seen the RFC from nvidia, which states how shadowCube is called.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.