GLSL shader logic on AMD cards?

Our new renderer is displaying graphical artifacts in the cascaded shadow map implementation. This seems to occur near the dividing line between different shadow stages. It’s almost as if the fragments are divided into regions that aren’t completely independent from one another, and the shader logic gets confused if two neighboring fragments do a texture lookup on different textures. If I discard all but one stage, that stage will appear correctly with no artifacts.

Is it possible there is some obscure GLSL rule or constraint I am unaware of, or is this a driver bug? It’s not a memory barrier issue.

The code that handles this looks like this:

		lightDir = normalize(lightmatrix[2].xyz);
		if (dot(lightDir, normal) > 0.0f) return lighttype;
		vec3 camspacepos = (CameraInverseMatrix * vec4(position, 1.0)).xyz;
		mat4 shadowmat;
		visibility = 1.0f;
		if (camspacepos.z <= 80.0)
		{
			int index = 0;
			shadowmat = ExtractCameraProjectionMatrix(lightIndex, index);
			if (camspacepos.z > CameraRange.x + 10.0) index = 1;
			if (camspacepos.z > CameraRange.x + 20.0) index = 2;
			if (camspacepos.z > CameraRange.x + 40.0) index = 3;
			uint sublight = floatBitsToUint(shadowmat[0][index]);
			//shadowMapID = ExtractLightShadowMapIndex(sublight);
			mat4 shadowrendermatrix = ExtractLightShadowRenderMatrix(sublight);
			shadowCoord = (shadowrendermatrix * vec4(position, 1.0f)).xyz;
			shadowCoord.y *= -1.0f;
			ExtractLightInfo(sublight, coneangles, shadowkernel, shadowMapID);
			shadowCoord.xy /= coneangles;
			shadowCoord.xy += 0.5f;
			shadowrange = vec2(-500, 500);
			if (shadowCoord.x < 0.0f || shadowCoord.y < 0.0f || shadowCoord.x > 1.0f || shadowCoord.y > 1.0f) return lighttype;
#ifdef USE_VSM
			shadowCoord.z -= 0.05f;
			float samp = shadowSample(texture2DSampler[shadowMapID], shadowCoord, shadowrange, bias);
#else
			shadowCoord.z = (shadowCoord.z - shadowrange.x) / (shadowrange.y - shadowrange.x);
			float samp = shadowSample(texture2DShadowSampler[shadowMapID], shadowCoord, shadowkernel).r;
#endif
			if (camspacepos.z > CameraRange.x + 70.0)
				samp = 1.0f - (1.0f - samp) * (1.0 - (camspacepos.z - 70.0) / 10.0);
			visibility = samp;
			attenuation *= samp;
		}

Sounds like the usual cross-map CSM artifact caused by huge shadow map texcoord derivatives within a quad.

There are several solutions (I know I’ve posted one here in the past), but the one I prefer is forcing all the pixels in a quad to sample from the same shadow map split, Andrew Lauritzen style.
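For reference, here’s a rough sketch of one way to force quad-uniform split selection on hardware that supports subgroup quad operations — this is my own assumption of the shape it might take, not the archived post’s code, and the split thresholds are stand-ins for your own selection logic:

```glsl
#extension GL_KHR_shader_subgroup_quad : require

// Pick the split per fragment as usual...
int splitIndex = 0;
if (camspacepos.z > CameraRange.x + 10.0) splitIndex = 1;
if (camspacepos.z > CameraRange.x + 20.0) splitIndex = 2;
if (camspacepos.z > CameraRange.x + 40.0) splitIndex = 3;

// ...then take the maximum over the 2x2 quad so all four fragments
// agree on a single split, which keeps the shadow texcoord
// derivatives within the quad sane.
splitIndex = max(max(subgroupQuadBroadcast(splitIndex, 0),
                     subgroupQuadBroadcast(splitIndex, 1)),
                 max(subgroupQuadBroadcast(splitIndex, 2),
                     subgroupQuadBroadcast(splitIndex, 3)));
```

Without subgroup ops you can get the same effect with dFdx/dFdy tricks on the index, but the quad broadcast version is the easiest to read.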

If you can’t find it in the archives let me know and I’ll get you a link.

Also note that MIPs + trilinear filtering isn’t the only mode that needs consistent and reasonable texcoord derivatives across a quad. Anisotropic filtering with no MIPs needs them too.
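Another way to sidestep the derivative problem entirely (again, my own suggestion, not the archived post): compute the gradients once from a split-independent coordinate and pass them explicitly with textureGrad, so the implicit quad derivatives never enter the picture. `baseShadowCoord` here is a hypothetical coordinate that is continuous across the screen, e.g. the cascade-0 projection:

```glsl
// Gradients of a coordinate with no divergent split logic behind it.
vec2 dx = dFdx(baseShadowCoord.xy);
vec2 dy = dFdy(baseShadowCoord.xy);

// Sample whichever split this fragment chose, with explicit gradients
// (scale dx/dy by the relative cascade extents if your splits differ
// in world-space size per texel).
float samp = textureGrad(shadowColorMap, shadowCoord.xy, dx, dy).r;
```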

Here’s one:

When I merged all shadow maps into a single texture the problem went away.

Oddly enough, dynamically non-uniform texture sampler indexing seems to cause no problems on AMD’s OpenGL drivers, but does cause rendering artifacts with their Vulkan drivers.
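That difference is actually expected: in Vulkan, indexing a sampler array with a dynamically non-uniform value is undefined behavior unless the index is explicitly marked as non-uniform. With GL_EXT_nonuniform_qualifier the fix is a one-liner (using the `texture2DSampler` / `shadowMapID` names from the code above):

```glsl
#extension GL_EXT_nonuniform_qualifier : require

// Tell the compiler the index may differ between invocations in a wave;
// without this, Vulkan drivers are allowed to assume it is uniform.
float samp = texture(texture2DSampler[nonuniformEXT(shadowMapID)], shadowCoord.xy).r;
```

OpenGL drivers tend to be more forgiving here, which would explain why only the Vulkan path shows artifacts.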