Problems using multiple texture2DProj() from depth

I’m rendering to two depth textures using two FBOs. The texture images look OK, and texturing with either image alone looks as expected (one is coarser than the other). However, if I try to texture with both images, using texture2DProj() with two varyings for texture coordinates and two texture units, then only the first texture seems to work right.

Is this another NVIDIA driver bug, or am I crazy? With GL, there’s no PIX for Windows, and no REF driver, so it’s harder to tell than under Direct3D.

Here’s my vertex shader and fragment shader. Not particularly advanced…

// vertex shader

varying vec3 N;
varying vec4 P0;
varying vec4 P1;

uniform mat4 world;
uniform mat4 worldViewProjection;
uniform mat4 lightViewProjection0;
uniform mat4 lightViewProjection1;

attribute vec3 position;
attribute vec3 normal;

void main(void) {
  gl_Position = worldViewProjection * vec4(position, 1);
  vec4 temp = world * vec4(position, 1);
  P0 = lightViewProjection0 * temp;
  P1 = lightViewProjection1 * temp;
  N = (world * vec4(normal, 0)).xyz;
}

// fragment shader

varying vec3 N;
varying vec4 P0;
varying vec4 P1;

uniform vec3 lightDirection;
uniform sampler2D lightDepthTexture0;
uniform sampler2D lightDepthTexture1;

void main(void) {
  float c = 1.0;//texture2DProj(lightDepthTexture0, P0).x;
  c = c * texture2DProj(lightDepthTexture1, P1).x;
  c = c * 0.9 + 0.1;
  float b = dot(N, lightDirection) * c + 0.1;
  gl_FragColor = vec4(b, b, b, 1);
}

I’m using drivers 163.71 on a GeForce 7400m on Windows XP SP2 on a Sony VAIO SZ-140P laptop.

Btw: the problem also happens if I use texture2D().

I have a hard time believing it’s my code, as it works as expected when I comment out either the first or the second texture2D() line. Finally, the binding code for the texture is as follows:

    static inline void apply(int v, RenderTarget *rt) {
      if (v != -1 && rt != NULL)
        glActiveTexture(GL_TEXTURE0 + v);
    }

“v” is the index of the uniform, and I get indices 0 and 1 when I’m using two textures, for the respective targets.

void RenderTarget::BindTexture(bool depth) {
  if (depth) {
    if (!depthTexture_)
      throw std::exception("Attempt to bind a render target that is not a texture, as a texture");
    glBindTexture(GL_TEXTURE_2D, depth_);
    if (dirtyDepth_)
      dirtyDepth_ = false;
  }
}

And, finally, screen shots to show the problem:

  1. the problem with the second map can be seen on the yonmost tile (a big black blob)

  2. the coarser shadow map that reaches that far, rendered alone (without the first, finer shadow map)

  3. the finer shadow map, rendered alone.

You’re not confusing uniform location with sampler index, are you?
You have to assign samplers to texture units yourself with glUniform1i(), otherwise both lightDepthTexture0 and lightDepthTexture1 will point to image unit 0 by default.
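In code, the fix looks something like this (`prog`, `depthTex0` and `depthTex1` are made-up names for your program object and depth texture IDs):

```cpp
// After linking: tell each sampler uniform which texture *unit*
// to read from. glUniform1i takes the unit index (0, 1, ...),
// which is unrelated to the uniform's location.
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "lightDepthTexture0"), 0);
glUniform1i(glGetUniformLocation(prog, "lightDepthTexture1"), 1);

// At draw time: bind one depth texture to each unit.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTex0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depthTex1);
```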

I’ve been too long in HLSL land :)

There were more basic problems, too: I was using texture2DProj instead of shadow2DProj, and sampler2D instead of sampler2DShadow. Sadly, the NVIDIA compiler/runtime didn’t even warn about that. According to the spec, sampling a depth-compare texture through a regular texture sampler results in “undefined” behavior, so they’re within their rights, but I’d prefer to get at least a warning…
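For the record, the corrected sampling looks roughly like this (untested sketch in the same GLSL 1.10-era style as above; the bound depth texture also needs GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE for the shadow sampler to work):

```glsl
varying vec4 P0;
uniform sampler2DShadow lightDepthTexture0;

void main(void) {
    // shadow2DProj does both the projective divide and the depth
    // comparison; .r holds the comparison result (0.0 or 1.0).
    float lit = shadow2DProj(lightDepthTexture0, P0).r;
    gl_FragColor = vec4(lit, lit, lit, 1.0);
}
```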

The use of sampler2D was not a problem in your fragment shader, since you used texture2DProj. The nature of the texture you bind to a sampler is up to your application, not the shader, which is why IMHO the compiler didn’t flinch.