Depth buffer texture access

Does anyone have any idea why GLSL always returns 0.0 when I try to access a 32-bit floating-point depth buffer texture?

When I read the depth buffer back with glReadPixels into a float array, I get plausible values in the range [~0.99, 1]; no texel is actually 0.0.


float referenceDepth = texture(depthBuffer, vec3(oldFragCoord.xy, 0.0));
float test = (referenceDepth == 0.0) ? 0.0 : 1.0;
result = vec4(test);  // fragment shader output

-> I use 0 = near, 1 = far, and the depth clear value is 1.0.
-> GL_TEXTURE_COMPARE_MODE is GL_NONE.
-> Accessing non-shadow samplers with the same coordinates works fine, so the coordinates seem correct and are all in [0,1].

Also, on my old GF9800 it worked fine, but not on the new system (GF560ti, driver 280.26, Win7 64, OpenGL 3.3 compatibility context).
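
For completeness, the texture setup is roughly along these lines (a sketch, not my exact code; function and variable names are placeholders, the FBO attachment and render pass are omitted, and an extension loader such as GLEW is assumed):

#include <GL/glew.h>

/* 32-bit float depth texture that is rendered to via an FBO and later
 * sampled in a fragment shader; no depth comparison is set up. */
static GLuint create_float_depth_texture(int w, int h)
{
    GLuint uiDepthTex;
    glGenTextures(1, &uiDepthTex);
    glBindTexture(GL_TEXTURE_2D, uiDepthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, w, h, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
    return uiDepthTex;
}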

Accessing non-shadow samplers with the same coordinates works fine

So are you attempting shadow mapping with the 32-bit float depth texture?

If so, your third texture coordinate seems odd, as it’s just a constant 0.0. Usually for shadow mapping the stpq texture coordinates are derived by multiplying the vertex geometry by the light’s modelview and projection matrices and dividing by q, so the p coordinate would not normally be 0.0.
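
For reference, the usual shadow-mapping coordinate setup looks something like the following sketch (hypothetical names, 3.3 compatibility profile as used in this thread; this is not code from this thread):

/* Vertex-shader snippet, stored as a C string: the stpq shadow coordinate
 * is the vertex transformed by bias * lightProjection * lightModelView;
 * textureProj() later divides stp by q, so p/q is the reference depth. */
static const char *shadow_vertex_shader =
    "#version 330 compatibility\n"
    "uniform mat4 lightBiasMVP;   // bias * light projection * light modelview\n"
    "out vec4 shadowCoord;\n"
    "void main() {\n"
    "    shadowCoord = lightBiasMVP * gl_Vertex;\n"
    "    gl_Position = ftransform();\n"
    "}\n";
/* Fragment side: float lit = textureProj(shadowMap, shadowCoord);
 * with shadowMap declared as a sampler2DShadow. */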

Have you tried using the regular depth texture format (24-bit unsigned integer)? Maybe it’s a driver bug with the float format?

Make sure “depthBuffer” is not a sampler2DShadow but a sampler2D, and that you bind it with glBindTexture(GL_TEXTURE_2D, uiDepthTex).
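
In other words, roughly this (a sketch with placeholder names, not the OP’s exact code; GLEW is assumed for the GL 3.x entry points):

#include <GL/glew.h>

/* Bind the depth texture as a plain GL_TEXTURE_2D and read it through a
 * sampler2D; texture() then returns the stored depth in the red channel. */
static void bind_depth_for_plain_sampling(GLuint prog, GLuint uiDepthTex)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, uiDepthTex);
    glUseProgram(prog);
    glUniform1i(glGetUniformLocation(prog, "depthBuffer"), 0);
    /* GLSL:  uniform sampler2D depthBuffer;   // not sampler2DShadow
     *        float referenceDepth = texture(depthBuffer, oldFragCoord.xy).r; */
}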

Wow, you saved my weekend, thank you! Using a sampler2D really works, but why? And why does using a sampler2DShadow not work (on my GF9800 it does…)? What exactly is the difference between accessing a sampler2DShadow and a sampler2D? I thought I had to use the shadow sampler when I have a DEPTH_COMPONENT_* texture, and that with GL_TEXTURE_COMPARE_MODE set to GL_NONE it would simply give me the depth value?

@BionicBytes: You have correctly deduced that I am not attempting to do shadow mapping :wink: I am doing some GPGPU stuff; I forgot to point that out, sorry.

I thought I had to use the shadow sampler when I have a DEPTH_COMPONENT_* texture, and that with GL_TEXTURE_COMPARE_MODE set to GL_NONE it would simply give me the depth value?

Not according to the spec. You must do it all one way or all the other: either it’s a shadow sampler with a non-GL_NONE comparison mode, or it’s a non-shadow sampler with a GL_NONE comparison mode. Your GF9800 simply implemented off-spec behavior.

But my GLSL spec 3.30.6 says in chapter 8.7 (“texture lookup functions”):

For shadow forms […], a depth comparison lookup on the depth texture bound to ‘sampler’ is done as described in section 3.8.16 ‘Texture Comparison Modes’ of the OpenGL Graphics System Specification.

And there (it is actually section 3.9.17 in the GL 3.3 compatibility spec) it says, in the paragraph “Depth Texture Comparison Mode”, that

If the value of TEXTURE_COMPARE_MODE is NONE, then r=D_t.

so no comparison…

To what part of the spec are you referring, or am I misinterpreting something?

I’m referring to page 187 of the OpenGL 3.3 core specification. At the bottom of the page, it lists the things that will cause texture accesses to be undefined, and the first two are exactly what I said: a non-shadow sampler used with a depth texture whose compare mode is not GL_NONE, and a shadow sampler used with a depth texture whose compare mode is GL_NONE.

I see. So,

For shadow forms […], a depth comparison lookup on the depth texture bound to ‘sampler’ is done as described in section 3.8.16 ‘Texture Comparison Modes’ of the OpenGL Graphics System Specification.

has to be interpreted as

“For shadow forms […], a depth comparison lookup on the depth texture bound to ‘sampler’ is done IN ANY CASE AND IS PERFORMED IN A WAY described in section 3.8.16”

So basically depth texture access is only controlled by GL_TEXTURE_COMPARE_MODE for the FF pipeline, and when using shaders I have no choice but to set it to GL_NONE or compare-to-ref depending on whether I use a non-shadow or a shadow sampler type, right?

I see. So, […] has to be interpreted as […]

No. What I’m saying is that the GLSL spec doesn’t say everything. It only defines the language: what the words mean in a shader.

Similarly, the GLSL spec says nothing about where vertex inputs come from (glVertexAttribPointer) or where fragment outputs go (glDrawBuffers) or anything of that nature.

What the GLSL spec is saying is that Shadow samplers are expected to be accessed according to the texture comparison logic. Whether it is legal to turn that texture comparison logic off is not governed by the GLSL spec, because that is not the function of the GLSL language.

The GLSL specification is a supplement to the OpenGL specification. You cannot fully understand everything you have to do in OpenGL if you only read the GLSL spec.

So basically depth texture access is only controlled by GL_TEXTURE_COMPARE_MODE for the FF pipeline, and when using shaders I have no choice but to set it to GL_NONE or compare-to-ref depending on whether I use a non-shadow or a shadow sampler type, right?

Your words got confusing there towards the end, but the OpenGL spec is very clear: there must be consistency between your shader and your texture/sampler object. Just as your sampler type in GLSL must match the OpenGL texture object type (which, BTW, the GLSL spec does not say; it only says that typed samplers allow consistency checking), you must match your Shadow usage to your texture comparison mode usage.

Yes it is. On p.91, chapter 8.7 of the GLSL spec, it says that results are undefined if a non-shadow texture call is made to a sampler that represents a depth texture with depth comparisons turned on, or if a shadow texture call is made to a sampler that represents a depth texture with depth comparisons turned off.

So it seems to me that the GLSL spec at least makes restrictions on what GL_TEXTURE_COMPARE_MODE may be, to keep the GL/GLSL interface working.

However, let me further explain my “confused words”:
In the GLSL spec it says that for shadow forms a depth comparison lookup is done as described in the GL spec section “Texture Comparison Modes”. So I looked it up there and found that the output of a texture unit is defined by:

  1. GL_DEPTH_TEXTURE_MODE
  2. GL_TEXTURE_COMPARE_MODE
  3. GL_TEXTURE_COMPARE_FUNC.

So I thought (at first): “hey, I set GL_TEXTURE_COMPARE_MODE to GL_NONE, the texture unit outputs the depth value instead of 0 or 1, and that is what I want”. I think this would also have complied with what you said: the GLSL spec just says that texture(…) takes over the values from the texture unit, which in turn operates according to the GL spec.
However, I obviously missed the part in both the GL spec (pointed out by you) and the GLSL spec (pointed out by me in this post) that specifies that sampler type and compare mode must match.

So I asked myself in what case it would even make sense to have the GL_TEXTURE_COMPARE_MODE state FOR DEPTH TEXTURES, since it can be derived in a 1:1 manner from the sampler type. This way I thought maybe there is some FF functionality that requires changing the compare mode… although I am not sure what that might be…

I’ve wondered about that as well. That is, for a texture of base type GL_DEPTH_COMPONENT, you either have:

  1. the depth compare (GL_TEXTURE_COMPARE_MODE) is enabled, AND
  2. sampler*Shadow sampler type in shader.

OR, you have

  1. the depth compare (GL_TEXTURE_COMPARE_MODE) is “disabled”, AND
  2. “non”-Shadow sampler type in shader.

It’s a one-to-one correspondence: one completely determines the other.
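
In GL terms the two pairings look roughly like this (a sketch with placeholder names; texture creation, filtering, and the rest of the state are omitted):

#include <GL/glew.h>

/* 1) Shadow path: comparison enabled on the texture, sampler2DShadow in GLSL. */
static void configure_shadow_lookup(GLuint depthTex)
{
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    /* GLSL:  uniform sampler2DShadow shadowMap;
     *        float lit = texture(shadowMap, vec3(coord, refDepth)); */
}

/* 2) Plain depth read: comparison disabled, sampler2D in GLSL. */
static void configure_plain_depth_read(GLuint depthTex)
{
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
    /* GLSL:  uniform sampler2D depthBuffer;
     *        float d = texture(depthBuffer, coord).r; */
}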

I don’t know the real reason, but my best guess is along the lines of your FF thinking. Depth comparison is “texture unit” state (and has been since before shaders), as is the PCF filtering that is stacked on top of it, whereas the sampler type used in the shader is “shader” state.

You’ve always had to configure depth compare on the fixed-function texture unit hardware (the samplers) that handles texture sampling and filtering, even before we had shaders.

Making this texture-unit depth-compare setting unnecessary would require that binding a shader reconfigure the texture units, and my guess is that they didn’t want to impose that overhead. It also wouldn’t relieve us from having to be conscious of which textures we bind (the types still have to match, etc.). Besides, we devs can just set this depth compare state once on the texture (or on a sampler object) and be done with it. There are also implications for supporting the compatibility profile. But that’s just a guess, not an answer.
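
For example, with GL 3.3 sampler objects the compare state can be set once on a separate object and bound wherever it is needed (a sketch, hypothetical names):

#include <GL/glew.h>

/* Create a sampler object with depth comparison enabled; binding it to a
 * texture unit overrides the texture's own sampling state for that unit. */
static GLuint create_shadow_sampler(void)
{
    GLuint smp;
    glGenSamplers(1, &smp);
    glSamplerParameteri(smp, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
    glSamplerParameteri(smp, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    glSamplerParameteri(smp, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glSamplerParameteri(smp, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return smp;
}
/* Usage: glBindSampler(unit, create_shadow_sampler()); */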

All this is not that dissimilar from having to match int/uint textures with int/uint sampler types in the shader, 2D array textures with 2D array sampler types, multisample textures with multisample sampler types, cube map arrays with depth compare enabled with cube map array shadow samplers, and so on. It just goes beyond the type of the texture to a dynamic texture attribute.
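
As a sketch of that analogy (hypothetical names): an unsigned-integer texture has to be read through a usampler2D rather than a float sampler.

#include <GL/glew.h>

/* Allocate a GL_R32UI texture; integer textures cannot be linearly filtered. */
static GLuint create_u32_texture(int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, w, h, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_INT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* GLSL:  uniform usampler2D uData;   uint v = texture(uData, uv).r; */
    return tex;
}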

Indeed. Scrolling through the sampler types in the GLSL spec, there is a lot of GL state encapsulated in the sampler types.
Although these relations between the GL state and the GLSL samplers are not always bijective (e.g. you can access float or normalized integer textures with a float sampler type), I wonder if it wouldn’t be possible to generate some kind of error indicator (GL_ERROR…) at shader runtime if something does not match (texture type bound to the unit vs. sampler type, shadow type vs. compare mode…). Of course this might have a big negative impact on performance, and checking at shader link time is not possible because the GL state can still change; however, an explicit glTestShader would do just fine… :wink:

Have you checked the ATI driver behavior? IIRC, the ATI driver does do sampler type compatibility checking in glValidateProgram (then just query GL_VALIDATE_STATUS to get the error message). I wouldn’t be surprised if they also check shadow/depth compare compatibility there, but I don’t know if they do.
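
For anyone who wants to try it, validating against the currently bound state looks roughly like this (a sketch):

#include <GL/glew.h>
#include <stdio.h>

/* Ask the driver whether the program can execute with the current GL state
 * (texture bindings, etc.) and print the driver's info log if it cannot. */
static int validate_against_current_state(GLuint prog)
{
    GLint ok = GL_FALSE;
    char log[4096];
    glValidateProgram(prog);
    glGetProgramiv(prog, GL_VALIDATE_STATUS, &ok);
    if (ok != GL_TRUE) {
        glGetProgramInfoLog(prog, sizeof log, NULL, log);
        fprintf(stderr, "glValidateProgram: %s\n", log);
    }
    return ok == GL_TRUE;
}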
