Odd GLSL Problem


I seem to have come across an extremely odd problem in GLSL. I’ve managed to simplify the problem to multiplying 2 textures together. Here are some screenshots to help explain (note the gl_FragColor line):

texture1 * texture2

Both texture1 and texture2 are fine by themselves, but multiplying them together, they go completely transparent (and it’s nothing to do with the alpha component, as the original error occurred when doing texture1.rgb * texture2.rgb, with the alpha component set to 1.0).

Does anyone have any ideas why this is occurring? I’m running a GF6800GT, driver version 91.47 (although I’ve seen it on earlier versions), Windows 2000.

I originally asked the question over at GameDev, tried a few things, but nothing seemed to work.

Thanks & Regards

A shader would be useful in diagnosing the problem.

The shaders are shown in the screen shots…

In the final broken screenshot there seems to be an outline; is this from the broken render or another render call?

Just some random things to try:

  • Make sure alpha test and alpha blend are disabled.
  • Try binding some different textures - perhaps some texture states are messing up?
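For reference, disabling those two states looks like this in the client code (a sketch, assuming a fixed-function-compatible context where GL_ALPHA_TEST still exists):

glDisable(GL_ALPHA_TEST);  /* no fragments killed by alpha comparison */
glDisable(GL_BLEND);       /* fragment alpha no longer affects framebuffer write */
/* ... draw call here ... */

With both disabled, an alpha of 0.0 in the shader output should make no visual difference, which helps isolate whether alpha is really the culprit.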

The shaders are shown in the screen shots…
My fault.

Try this fragment shader, explicitly setting the alpha directly:

vec4 temp = texture1 * texture2;
temp.a = 1.0;
gl_FragColor = temp;

If this fixes it, you must have some alpha going on in one of those textures, and you’re alpha blending or alpha testing (probably accidentally).

I actually had an issue with a Quadro FX4500 where a simple ARBfp program (texture fetch and output, maybe one other instruction) was somehow enabling what could only be described as alpha testing, despite explicitly disabling anything that could result in a fragment being killed immediately prior to the draw call. Setting the alpha output to any value greater than zero in the fragment program caused the geometry to appear; anything else caused nothing to be rendered. This seemed to involve a particular texture/texture unit fetch, as it worked when sampling from a different texture unit. I don’t remember the driver number, but it was pretty recent, maybe the same version as elFarto’s.

Thank you for your replies.

I tried Korval’s suggestion, but to no avail. It’s definitely something to do with texture1 (which comes from the cubemap), as any operation (-, +, / or *) that involves texture1 causes the problem. The odd thing is, texture1 looks fine by itself.

The outline in the final screenshot is from a different render call.

Here is the assembler output from the shader in the final screenshot, dumped using NVemulate:

OPTION NV_fragment_program2;
# cgc version 1.6.0000, build date Aug 11 2006 19:59:24
# command line args: 
#vendor NVIDIA Corporation
#profile fp40
#program main
#semantic sampler0
#semantic sampler3
#semantic sampler4
#semantic sampler1
#semantic sampler2
#var float4 gl_TexCoord[0] : $vin.TEX0 : TEX0 : -1 : 1
#var float4 gl_TexCoord[1] :  :  : -1 : 0
#var float4 gl_TexCoord[2] :  :  : -1 : 0
#var float4 gl_TexCoord[3] :  :  : -1 : 0
#var float4 gl_TexCoord[4] :  :  : -1 : 0
#var float4 gl_TexCoord[5] :  :  : -1 : 0
#var float4 gl_TexCoord[6] :  :  : -1 : 0
#var float4 gl_TexCoord[7] :  :  : -1 : 0
#var float4 gl_FragColor : $vout.COLOR : COL : -1 : 1
#var float3 camera :  :  : -1 : 0
#var sampler2D sampler0 :  :  : -1 : 0
#var float3 normal :  :  : -1 : 0
#var float3 lightDir :  :  : -1 : 0
#var sampler2D sampler3 :  :  : -1 : 0
#var sampler2D sampler4 :  :  : -1 : 0
#var float3 reflection : $vin.TEX1 : TEX1 : -1 : 1
#var samplerCUBE sampler1 :  : texunit 0 : -1 : 1
#var sampler2D sampler2 :  : texunit 1 : -1 : 1
PARAM c[1] = { program.local[0] };
OUTPUT oCol = result.color;
TEX   R1, fragment.texcoord[0], texture[1], 2D;
TEX   R0, fragment.texcoord[1], texture[0], CUBE;
ADDR  oCol, R0, R1;
# 3 instructions, 2 R-regs, 0 H-regs

Thanks & Regards

Do you set correct values for the sampler uniforms (they are set to zero initially)?

Originally posted by Komat:
Do you set correct values for the sampler uniforms (they are set to zero initially)?
Yes, the texture shows fine by itself (see the first image in my initial post), it’s a bit hard to see though.


Originally posted by elFarto:
Yes, the texture shows fine by itself (see the first image in my initial post), it’s a bit hard to see though.

Since all samplers are initially set to zero, and one texture unit can simultaneously have several textures of different types (2D, cube, 3D, …) bound to it, the shader might appear to work with one texture even if the samplers were not set correctly. However, it is an error if one texture unit is addressed through different texture types (2D and cube) from a single shader, and the offending rendering will fail with an INVALID_OPERATION error.
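To avoid that, each sampler uniform should be pointed at its own texture unit after linking. A sketch, using the uniform names and unit assignments from the assembler dump above (sampler1 is the samplerCUBE on unit 0, sampler2 the sampler2D on unit 1):

glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "sampler1"), 0); /* samplerCUBE -> texture unit 0 */
glUniform1i(glGetUniformLocation(program, "sampler2"), 1); /* sampler2D  -> texture unit 1 */

If both samplers were left at their default value of 0, the cube and 2D samplers would address the same unit through different texture types, which is exactly the INVALID_OPERATION case described above.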

Ok, thanks for all your help. I have discovered the problem, and it’s completely unrelated to the texture unit.

The program I’m working on is to display the models from a game (EVE Online). To do this, it extracts the shader information from the game’s files and converts it from DirectX texture stage operations into a GLSL shader.

The shader I posted above isn’t the shader that it tries to compile initially; my code was generating an invalid shader (something like foo = vec4(1.0, 1.0)), and I had stripped out all the lines I thought were irrelevant. After I fixed my application, the problem went away.
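For anyone hitting the same compile error: a GLSL vec4 constructor must account for all four components, so the broken line fails while constructors like these are valid (variable names here are just for illustration):

vec4 foo = vec4(1.0, 1.0);            // error: too few components
vec4 bar = vec4(1.0, 1.0, 1.0, 1.0);  // OK: all four components given
vec4 baz = vec4(vec2(1.0), 0.0, 1.0); // OK: the vec2 supplies two components
vec4 qux = vec4(1.0);                 // OK: single scalar is replicated to all four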

Thanks for all your help, even if the problem was nothing to do with OpenGL/drivers.