I have an offline shadow map pre-stored in a texture with format GL_R16, and I want to use sampler2DShadow to implement PCF. It works fine on Adreno, but goes wrong on Mali. Can anyone tell me the reason? And what texture formats support sampler2DShadow? I tried GL_R16F, but it doesn't work even on Adreno.
Not sure about the specifics on Adreno, Mali, or OpenGL ES in general.
But traditionally in OpenGL, IIRC, sampler2DShadow and hardware depth texture compares (GL_TEXTURE_COMPARE_MODE = GL_COMPARE_R_TO_TEXTURE / GL_COMPARE_REF_TO_TEXTURE) for texture lookups were only supported for a texture with type GL_DEPTH_COMPONENT or GL_DEPTH_STENCIL (so a texture with genuine depth). Browsing the OpenGL 4.6 spec, I think that's still the case (see the Texture Access section).
UPDATE: Yep. Browsing the OpenGL ES 3.2 spec, it appears that the same is true there. See section 8.20.1 Depth Texture Comparison Mode. You're not even in "undefined behavior" land there; it explicitly says you won't get depth texture compares with an incompatible texture format.
For depth? Sure. Any internal texture format with a base internal format of GL_DEPTH_COMPONENT or GL_DEPTH_STENCIL which is supported by your hardware: for instance, GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT32F, GL_DEPTH24_STENCIL8, and GL_DEPTH32F_STENCIL8, as listed in Table 8.11 in the OpenGL ES 3.2 spec.
I checked the table, but I'm still confused about how to store depth in a texture rather than in a framebuffer. Maybe I should store the data as unsigned shorts and use the following API?
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, 800, 600, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, TexData);
Just look for one of the many online or print tutorials on doing Shadow Mapping with OpenGL. They show how to create a depth texture, bind it as the depth buffer of a framebuffer object (FBO), draw a scene to it, and then bind that same depth texture to a shader sampler, so that a subsequent render pass, which applies the generated depth/shadow map to the scene, can read the depth values in the shader.
It should basically be exactly what you have now – but just plugging in a proper depth texture in place of the GL_R16 texture.
If you read a depth texture value into the shader via a standard sampler2D sampler (with HW depth compares disabled), then you get normalized 0…1 depth values back. Of course you can instead do what you’re trying to do and let GL do the depth compares (and possibly additional PCF filtering) behind-the-scenes by using a sampler2DShadow and enabling HW depth compares.
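On the shader side, the shadow-compare path looks like this (a GLSL ES fragment-shader excerpt embedded as a C string; the uniform and varying names are made up for illustration):

```c
/* Fragment-shader excerpt (GLSL ES 3.00) showing the sampler2DShadow
 * path described above. Names here are illustrative, not from the post. */
static const char *frag_src =
    "#version 300 es\n"
    "precision mediump float;\n"
    "uniform mediump sampler2DShadow uShadowMap;\n"
    "in highp vec4 vShadowCoord;  // light-space position\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    // texture() compares the ref depth (.z) against the stored\n"
    "    // depth in hardware; returns 0.0 (shadowed) .. 1.0 (lit),\n"
    "    // filtered for PCF when the texture filter is LINEAR.\n"
    "    float lit = texture(uShadowMap,\n"
    "                        vShadowCoord.xyz / vShadowCoord.w);\n"
    "    fragColor = vec4(vec3(lit), 1.0);\n"
    "}\n";
```

With a plain sampler2D (compare mode disabled) you would instead read the normalized depth value yourself and do the comparison in shader code.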
As far as other file formats go… if you only need 16-bit greyscale, PNG will handle that. For 32-bit float or 32-bit int textures, take a look at OpenEXR. Doubtless there are other predefined formats to choose from, but I'd lean toward KTX, since it's oriented toward storing all OpenGL and Vulkan texture formats.
Initially though, for testing, just roll your own format and keep it simple.