Radeon Problem.

I have a vertex/fragment shader pair that compiles fine on GeForces, but will not compile on a Radeon. I’ve tried it on a 9600 and an X800.

There’s some commented-out stuff in there, but I thought it best to leave it just as it is.

  
varying vec4 eyePos;

void main(void)
{	
	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_TexCoord[1] = gl_MultiTexCoord1;
	gl_TexCoord[2] = gl_MultiTexCoord2;
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	//gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
	//gl_FrontColor = vec4(1.0, 0.0, 0.0, 1.0); // Hard-code red for testing purposes
	
	//Fog Stuff
	eyePos = gl_ModelViewProjectionMatrix * gl_Vertex;
	gl_FogFragCoord = abs(eyePos.z/eyePos.w);
}
  
uniform sampler2D bTexture,tTexture,lTexture;

varying float maxC;
varying vec4  tColor1,tColor2,lColor;



void main(void)
{
		
	//gl_FragColor = vec4 (0.5,1.0,1.0, 1.0) * gl_Color;
	
	//gl_FragColor =  vec4(inColor,1.0) ;
	
	//Percent = texture2D(tTexture, vec2(gl_TexCoord[1])).a;
	tColor1 = texture2D(bTexture, vec2(gl_TexCoord[0]));
	tColor2 = texture2D(tTexture, vec2(gl_TexCoord[1]));
	//lColor = gl_Color;
	//lColor = texture2D(lTexture, vec2(gl_TexCoord[2]));
	lColor = texture2D(lTexture, vec2(gl_TexCoord[2]))+(gl_Color-0.5);
	maxC=0;
	if (lColor.r>maxC)
		maxC=lColor.r;
	else if (lColor.g>maxC)
		maxC=lColor.g;
	else if (lColor.b>maxC)
		maxC=lColor.b;
	
	if (maxC>1)
		lColor = lColor/maxC;
	gl_FragColor = vec4((vec3(tColor1) * ( 1.0 - tColor2.a )+ vec3(tColor2) * tColor2.a)*lColor,tColor1.a);
	//gl_FragColor = gl_Color;
}

This should work. (Always specify the .0 when you mean a float, and you cannot write to varyings…)

 
uniform sampler2D bTexture,tTexture,lTexture;

float maxC;
vec4  tColor1,tColor2,lColor;



void main(void)
{
		
	//gl_FragColor = vec4 (0.5,1.0,1.0, 1.0) * gl_Color;
	
	//gl_FragColor =  vec4(inColor,1.0) ;
	
	//Percent = texture2D(tTexture, vec2(gl_TexCoord[1])).a;
	tColor1 = texture2D(bTexture, vec2(gl_TexCoord[0]));
	tColor2 = texture2D(tTexture, vec2(gl_TexCoord[1]));
	//lColor = gl_Color;
	//lColor = texture2D(lTexture, vec2(gl_TexCoord[2]));
	lColor = texture2D(lTexture, vec2(gl_TexCoord[2]))+(gl_Color-0.5);
	maxC=0.0;
	if (lColor.r>maxC)
		maxC=lColor.r;
	else if (lColor.g>maxC)
		maxC=lColor.g;
	else if (lColor.b>maxC)
		maxC=lColor.b;
	
	if (maxC>1.0)
		lColor = lColor/maxC;
	gl_FragColor = vec4((vec3(tColor1) *
                      ( 1.0 - tColor2.a ) + 
                       vec3(tColor2) * tColor2.a)
                       *lColor.rgb,tColor1.a);
	//gl_FragColor = gl_Color;
}
 

To elaborate a bit on the previous post.

In the OpenGL Shading Language, you cannot write to a varying variable in the fragment shader. You may write to a varying variable in the vertex shader; it will be interpolated, and the interpolated value will be passed to the fragment shader.
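For instance, a minimal pair just to illustrate the rule (the varying name here is made up):

// vertex shader (sketch): write the varying here...
varying vec4 baseColor;   // hypothetical name, for illustration only

void main(void)
{
	baseColor   = gl_Color;
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader (sketch): ...and only read it here
varying vec4 baseColor;

void main(void)
{
	gl_FragColor = baseColor;   // OK: read the interpolated value
	// baseColor = vec4(1.0);   // error: fragment shaders cannot write to varyings
}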

The OpenGL Shading Language specification also requires a decimal point when you mean a float: “1” is an integer, while “1.0” is a float, and there is no implicit conversion between them. If cross-platform shaders are your goal, ATI’s compiler is much stricter and will give you a more portable shader. From what I see, you don’t have a Radeon problem but a GeForce problem, as it is accepting ill-formed, out-of-spec code.
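For example, under a strict compiler:

float bad  = 1;     // ill-formed: "1" is an int, and there is no implicit int-to-float conversion
float good = 1.0;   // correct: "1.0" is a float literal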

If you wish to continue with an nVIDIA card and compiler, you may want to double-check your code with GLSL Validate. It is a simple parser that verifies shader code is to spec, and spec-conformant code is more likely to compile on different video cards.

Yeah, not having a Radeon kinda slows down debugging for it. So obviously I misinterpreted how to use varying.

Is the debugger in Shader Designer as good as GLSL Validate? Because I used that to fix my shaders; before, I was just doing it in Notepad.

You can also use nvemulate to enable strict shader portability warnings. That way I found some non-portable statements in my own shaders. You can also let the driver save the shader info logs, the combined shader source, and the generated low-level ARB_vp/fp shaders to text files.

GLSL Validate checks your shader syntax; it is not a debugger. It takes very little time to use, as it is quite simple. Even if you use nvemulate with the strict flag, it is still a good idea to run the shader through GLSL Validate, since the warning messages are often lacking information.

I’ve got a new problem with Radeons. The code works properly now, but the performance is terrible. On my GeForce 6600 I get around 150 fps; on a Radeon X800 I get around 20. Clearly there is an issue, and I’m not even sure where to start looking for it.

The problem seems to be the conditionals. Try computing just one path first. If the timing is OK, try checking every path (again, without conditionals).
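For example, you could temporarily replace the whole if/else chain with one fixed path, just to see whether the branches are what is costing you (a throwaway timing test, not the real logic):

// timing test only: one fixed path, no branches
maxC = lColor.r;
lColor = lColor / max(maxC, 1.0);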

Conditionals are poorly supported (or not supported at all) on ATI’s cards.

  
if (lColor.r>maxC)
   maxC=lColor.r;
else if (lColor.g>maxC)
   maxC=lColor.g;
else if (lColor.b>maxC)
   maxC=lColor.b;

can be done more quickly like this:

maxC = max( lColor.r, lColor.g );
maxC = max( maxC, lColor.b );

if (maxC>1.0)
   lColor = lColor/maxC;

is quicker like this:

maxC = max(maxC, 1.0);
lColor = lColor/maxC;

Avoid using conditionals; always prefer built-in functions like max, min, … as I showed you above.
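Put together, the conditional part of your fragment shader could look something like this (a sketch of the same logic using only built-ins; untested on a Radeon here):

tColor1 = texture2D(bTexture, vec2(gl_TexCoord[0]));
tColor2 = texture2D(tTexture, vec2(gl_TexCoord[1]));
lColor  = texture2D(lTexture, vec2(gl_TexCoord[2])) + (gl_Color - 0.5);

// branch-free: divide by the largest channel, but never by less than 1.0
maxC   = max(max(lColor.r, lColor.g), lColor.b);
maxC   = max(maxC, 1.0);
lColor = lColor / maxC;

gl_FragColor = vec4((vec3(tColor1) * (1.0 - tColor2.a) +
                     vec3(tColor2) * tColor2.a) * lColor.rgb,
                    tColor1.a);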

happy coding!

tColor1 = texture2D(bTexture, vec2(gl_TexCoord[0]));
tColor2 = texture2D(tTexture, vec2(gl_TexCoord[1]));

lColor = texture2D(lTexture, vec2(gl_TexCoord[2]))+(gl_Color-0.5);

These lines appear to be the source of the problem. I’m not sure why just yet.
