Varying variables cause massive frame drops

Hi,

I’m trying to implement a lighting shader to add “lamps” to the scene, and I’m running into a problem where every varying variable I use to pass data from the vertex shader to the fragment shader reduces the framerate by ~15 FPS.

With 3 “lamps”, that means passing 3 float and 3 vec3 varying variables, which cuts my FPS almost in half.

I’m using a Lenovo T400 running Windows 7, with an ATI Mobility Radeon HD 3400 Series, driver version 8.641.1.1000 (the latest version).

Any help would be appreciated,
Vince

Are you sure it is the varying usage and not the extra calculations in the fragment shader?

You could pack the data into vec4s to gain some advantage by using fewer varyings.
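For example, a rough sketch of what I mean (untested; the names are just illustrative for your 3 floats + 3 vec3s, and ecPos stands for your eye-space vertex position):

// vertex shader: pack each light's direction (xyz) and distance (w)
// into one vec4, so 3 lights need 3 interpolants instead of 6
varying vec4 light0, light1, light2;

// in the vertex shader's main(), per light:
vec3 aux = vec3(gl_LightSource[0].position - ecPos);
light0 = vec4(aux, length(aux));

// in the fragment shader: unpack, renormalizing the interpolated direction
vec3 dir0   = normalize(light0.xyz);
float dist0 = light0.w;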

You could also post your shader here - if you are doing something weird or expensive (loops? branching?) that could cause your slowdown.

Well, here are both the VS and FS (I’ve stripped them to the bare minimum):

Vertex shader:

varying vec4 ambientGlobal;
varying vec3 normal;

varying vec3 lightDir0, lightDir1, lightDir2;
varying float dist0, dist1, dist2;

void main()
{
	vec4 ecPos;
	vec3 aux;

	normal = normalize(gl_NormalMatrix * gl_Normal);

	ecPos = gl_ModelViewMatrix * gl_Vertex;

	// Light0
	aux = vec3(gl_LightSource[0].position - ecPos);
	lightDir0 = normalize(aux);
	dist0 = length(aux);

	// Light1
	aux = vec3(gl_LightSource[1].position - ecPos);
	lightDir1 = normalize(aux);
	dist1 = length(aux);

	// Light2
	aux = vec3(gl_LightSource[2].position - ecPos);
	lightDir2 = normalize(aux);
	dist2 = length(aux);

	ambientGlobal = gl_LightModel.ambient * gl_FrontMaterial.ambient;

	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_TexCoord[1] = gl_MultiTexCoord1;

	gl_Position = ftransform();
}

Fragment shader:

varying vec4 ambientGlobal;
varying vec3 normal;

varying vec3 lightDir0, lightDir1, lightDir2;
varying float dist0, dist1, dist2;

uniform sampler2D tex;
uniform sampler2D texLm;

uniform vec4 lightCol0, lightCol1, lightCol2;

uniform int addGlow, hasLightMap;
uniform vec4 glowColor;

void main()
{
	vec3 n;
	float NdotL;
	vec4 color = ambientGlobal;

	n = normalize(normal);

	// Light0
	NdotL = max(dot(n, normalize(lightDir0)), 0.1);
	color += NdotL / (dist0 * lightCol0);

	// Light1
	NdotL = max(dot(n, normalize(lightDir1)), 0.1);
	color += NdotL / (dist1 * lightCol1);

	// Light2
	NdotL = max(dot(n, normalize(lightDir2)), 0.1);
	color += NdotL / (dist2 * lightCol2);

	gl_FragColor = texture2D(tex, gl_TexCoord[0].st) * color;

	if (hasLightMap == 1 && addGlow == 1)
	{
		if (texture2D(texLm, gl_TexCoord[0].st) != vec4(0.0))
			gl_FragColor += glowColor;
	}
}

I’m starting to think that it’s a driver issue. The reason is that once in a while the same code runs twice as fast (and smooth). Has anyone ever experienced this kind of behavior? The specs of the machine I’m using are in the original post.

There are quite a few things you can do to optimize your code.
Somebody already gave you good advice to pack the variables into arrays (which you unfortunately ignored).
Secondly, the bottom of the fragment shader ALWAYS executes the expensive texture access. You normalize the normal twice, once in the vertex and once in the fragment shader, etc. All these things have the potential to seriously impede performance on older video cards.
And lastly, the fact that it sometimes runs fast may point to the problem being on the CPU side, in some OpenGL calls.
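For instance, a minimal sketch of the array packing (assuming your GLSL version accepts varying arrays, e.g. 1.20; names match the shaders above):

// one varying array instead of three separately-named vec3s,
// and the three distances packed into a single vec3
varying vec3 lightDir[3];
varying vec3 dists; // dists[0] = dist0, dists[1] = dist1, dists[2] = dist2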

You need to normalize twice as it is possible (and likely) for the normal to become ‘unnormalized’ as it gets interpolated between the vertices.

Regards
elFarto

Actually, if you want precise lighting you should only normalize in the fragment shader - interpolating an already normalized vector will give incorrect results.

As for speed - I suspect the “if statements” based on uniform values are not going to be fast. It would be much faster to compile out the different combinations of the shader (using #ifdefs etc).
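Something like this, for example (a sketch; USE_GLOW is a made-up define you would prepend to the source string before compiling, keeping one compiled program per combination):

// fragment shader variant, selected at compile time instead of branching on uniforms
#ifdef USE_GLOW
	// the lightmap fetch only exists at all in the glow variant
	if (texture2D(texLm, gl_TexCoord[0].st) != vec4(0.0))
		gl_FragColor += glowColor;
#endif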

> You need to normalize twice as it is possible (and likely) for the normal to become ‘unnormalized’ as it gets interpolated between the vertices.

I am a bit lost. Why do you need to normalize in the vertex shader?

Really? Why?
I always thought it was safer to normalize first in the vertex shader, then in the fragment shader.

Normalize only in the frag shader if the per-vertex normals of all vertices used in the draw-call batch have uniform length k (e.g. k = 1.0 or k = 1.7774) after transform. This is almost always the case.

Normalization in the vertex shader is necessary when that uniform length can’t be guaranteed: e.g. vertex-texture fetching from a lossy texture, skinned meshes, vertex morphs, or a skewed normal-rotation matrix.

Normalization in the frag shader is always necessary, unless you’re flat-shading (the triangle’s 3 post-VS vertex normals point in the same direction and have length = 1).

I’ll try to explain with maths (but beware, I might be wrong):


vec3 n1, n2, n3; // post-VS normals of each vtx in the triangle

//-----[ gpu internals per fragment, in a nutshell ]-----
float w1, w2, w3; // barycentric weights: w1+w2+w3 = 1.0, each in the range [0;1]

vec3 n_sentto_fragShader = w1*n1 + w2*n2 + w3*n3;
//-------------------------------------------------------

void main() { // frag shader
	float LenIn = length(n_sentto_fragShader);
	vec3 goodNormal = normalize(n_sentto_fragShader); // = n_sentto_fragShader / LenIn
}

**********************************




Let's create some constants:

//------[ maths constants ]---------------------------
const vec3 kNorm1, kNorm2, kNorm3; // some true normals, length = 1.0; just useful maths constants
const float kWg1, kWg2, kWg3;      // kWg1+kWg2+kWg3 = 1.0
const vec3 kFragNorm1 = kNorm1*kWg1 + kNorm2*kWg2 + kNorm3*kWg3;
const float kFLen1 = length(kFragNorm1); // will not be 1.0 in most cases, unless kNorm1 = kNorm2 = kNorm3
//-----------------------------------------------------




Test1:

	n1..n3 have length=1.0 , 
	n1 = kNorm1*1.0;
	n2 = kNorm2*1.0;
	n3 = kNorm3*1.0; 
	
	w1 = kWg1;
	w2 = kWg2;
	w3 = kWg3;
	
	==> n_sentto_fragShader = kFragNorm1;
	==> LenIn = kFLen1; 
	==> goodNormal = kFragNorm1/kFLen1;
	
	obviously, LenIn will become !=1.0  unless flatshading

	
Test2:
	n1..n3 have length 1.89 :
	
	n1 = kNorm1*1.89;
	n2 = kNorm2*1.89;
	n3 = kNorm3*1.89;
	
	w1 = kWg1;
	w2 = kWg2;
	w3 = kWg3;
	
	vec3 n_sentto_fragShader = w1*n1 + w2*n2 + w3*n3 = 
		= kWg1*kNorm1*1.89 + kWg2*kNorm2*1.89 + kWg3*kNorm3*1.89  = 
		= (kNorm1*kWg1 + kNorm2*kWg2 + kNorm3*kWg3)*1.89  = 
		= kFragNorm1 * 1.89;
	
	float LenIn = length(n_sentto_fragShader) = 
		= length(kFragNorm1 * 1.89) = 
		= kFLen1 * 1.89;
	
	vec3 goodNormal = (kFragNorm1*1.89) / (kFLen1 * 1.89) = 
		= kFragNorm1 / kFLen1; // !!! see, how uniform scale didn't bother us
	
	
	
	
	
Test3:
	(flat-shading)
	
	n1 = n2 = n3 = kNorm1;
	
	w1 = kWg1;
	w2 = kWg2;
	w3 = kWg3;
	
	vec3 n_sentto_fragShader = w1*n1 + w2*n2 + w3*n3 = 
		= kNorm1 * (kWg1+kWg2+kWg3) = 
		= kNorm1;
		
	float LenIn = length(n_sentto_fragShader) = 
		= length(kNorm1) =
		= 1.0;
		
	vec3 goodNormal = kNorm1;
		
	

Test4:
	(vertex morphing, non-uniform scale of normals)
	
	n1 = kNorm1* 1.4;
	n2 = kNorm2* 1.5;
	n3 = kNorm3* 1.6;
	
	w1 = kWg1;
	w2 = kWg2;
	w3 = kWg3;
	
	vec3 n_sentto_fragShader = (kNorm1*kWg1*1.4 + kNorm2*kWg2*1.5 + kNorm3*kWg3*1.6);
	
	float LenIn = unresolvable;     // the per-vertex scales can't be factored out of the sum
	vec3 goodNormal = unresolvable; // this is the case where you must normalize in the vertex shader first
	
	

Note that I was talking about normalizing the light-to-vertex vector in the vertex shader - if you do this, you will get different lighting directions across the polygon surface even if you normalize again in the fragment shader.
(You would probably only notice it on large polygons where a point light is near the surface.)
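In code terms, a sketch of what I mean, using the shaders posted above:

// vertex shader: pass the light vector unnormalized
lightDir0 = vec3(gl_LightSource[0].position - ecPos); // no normalize() here

// fragment shader: normalize the interpolated vector per fragment
vec3 L0 = normalize(lightDir0);
NdotL = max(dot(n, L0), 0.1);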
