I’m encountering a strange bug on an nVidia card while trying to perform an input-dependent texelFetch from a multisample texture in a geometry shader.
The shader has the following input:
flat in ivec3 iGBufferCoord;
… and a 2D multisample texture sampler:
uniform sampler2DMS sNormal;
Now, in the main function of the geometry shader, I have the following:
vec3 normal = texelFetch( sNormal, iGBufferCoord.xy, iGBufferCoord.z ).xyz;
This always behaves as if iGBufferCoord.z were 0, even when I force it to some other value in the vertex shader (so I’m sure the code is not feeding zeros for this value).
It looks to me like a broken optimization for compile-time constant expressions.
Is there a pragma on nVidia to disable such optimizations that I suspect are the culprit?
Note that elsewhere in my shaders I iterate over sample indices and fetch texels, and that works just fine (when the sample index is a local loop variable). However, if I copy iGBufferCoord.z to a local variable and use that as the sample index, it still doesn’t work.
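For reference, here is a minimal sketch of the contrast between the two cases (the loop bound of 4 is illustrative and assumes a 4x-multisampled texture):

```glsl
// Works: the sample index is a local loop variable.
vec3 sum = vec3( 0.0 );
for ( int i = 0; i < 4; ++i )
    sum += texelFetch( sNormal, iGBufferCoord.xy, i ).xyz;

// Fails on this driver: the sample index originates from a flat input,
// even when copied to a local variable first.
int sampleIdx = iGBufferCoord.z; // workaround attempt, no effect
vec3 normal = texelFetch( sNormal, iGBufferCoord.xy, sampleIdx ).xyz;
```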
Any help greatly appreciated.