# GLSL shader for replicating the fixed-function pipeline

Hi all,
I am trying to replicate the OpenGL fixed-function pipeline (FFP) lighting in a GLSL shader. Here is what I am doing right now.

```glsl
#version 330

in vec2 vUV;
in vec3 vVertex;
in vec3 vNormal;

smooth out vec2 vTexCoord;
smooth out vec4 color;

uniform mat3 N;
uniform mat4 MV;
uniform mat4 MVP;
uniform vec4 lightPos;

uniform vec4 mat_ambient;
uniform vec4 mat_diffuse;
uniform vec4 mat_specular;
uniform float mat_shininess;

uniform vec4 light_ambient;
uniform vec4 light_diffuse;
uniform vec4 light_specular;

void main()
{
    vTexCoord = vUV;
    // transform the normal by the normal matrix (inverse transpose of the modelview matrix)
    vec3 Nr = N * vNormal;
    vec4 eyePos = MV * vec4(vVertex, 1);
    vec3 L = (MV * normalize(lightPos - eyePos)).xyz;
    vec4 A = mat_ambient * light_ambient;
    float diffuse = max(dot(Nr, L), 0.0);

    vec3 R = normalize(2.0 * dot(L, Nr) * Nr - L);
    vec3 V = L;

    float RDotV = dot(R, V);
    vec4 S = light_specular * mat_specular * max(pow(RDotV, mat_shininess), 0.0);
    vec4 D = diffuse * mat_diffuse * light_diffuse;
    color = A + D + S;
    gl_Position = MVP * vec4(vVertex, 1);
}
```

But I don't get the same result as the fixed-function code (see the attachment).
I think the problem is the view-vector calculation: I am using the light vector as the view vector. How do I get the eye vector? Note that the eyePos variable in the code is the vertex position in eye space.
In eye space the eye position is (0,0,0,1), right? So is my view vector V = normalize(vec4(0,0,0,1) - eyePos)?

Use 3DLabs ShaderGen. This tool can generate GLSL code that emulates the fixed-function pipeline. The tool is no longer available officially, but Google can still find it. Search for "shadergen glsl".

Hi,
Thanks for the reply. I just adjusted my shader based on the output from the 3DLabs ShaderGen. The updated shader is this:

```glsl
#version 330

in vec2 vUV;
in vec3 vVertex;
in vec3 vNormal;

smooth out vec2 vTexCoord;
smooth out vec4 color;

uniform mat3 N;
uniform mat4 MV;
uniform mat4 MVP;
uniform vec4 lightPos;

uniform vec4 mat_ambient;
uniform vec4 mat_diffuse;
uniform vec4 mat_specular;
uniform float mat_shininess;

uniform vec4 light_ambient;
uniform vec4 light_diffuse;
uniform vec4 light_specular;

void main()
{
    vTexCoord = vUV;
    // transform the normal by the normal matrix (inverse transpose of the modelview matrix)
    vec3 Nr = normalize(N * vNormal);
    vec4 esPos = MV * vec4(vVertex, 1);
    vec3 ecPosition3 = esPos.xyz / esPos.w;
    vec3 eye = vec3(0.0, 0.0, 1.0);

    // compute and normalize the vector from the surface to the light position
    vec3 L = normalize(lightPos.xyz - ecPosition3);
    vec3 halfVector = normalize(eye + L);

    vec4 A = mat_ambient * light_ambient;
    float diffuse = max(dot(Nr, L), 0.0);
    float pf = 0.0;
    if (diffuse == 0.0)
        pf = 0.0;
    else
        pf = max(pow(dot(Nr, halfVector), mat_shininess), 0.0);

    vec4 S = light_specular * mat_specular * pf;
    vec4 D = diffuse * mat_diffuse * light_diffuse;
    color = A + D + S;
    gl_Position = MVP * vec4(vVertex, 1);
}
```

But even then the output is wrong. I have removed the texture completely. The outputs from the FFP and the shader are as follows; the output is worse than before. There is one more thing I want to ask: the ShaderGen code divides the eye-space vertex position (esPos) by the w coordinate. Why is only the vertex position divided by w? And the eye is given the vector (0,0,1), the reason being that in eye space the eye is at the origin looking down the -Z axis, so the vector pointing toward the eye is (0,0,1). Correct?

OK, problem solved. The normal matrix was being calculated incorrectly. I disabled my custom matrix library, used glm instead, and voila, the output is exactly like the FFP.

I really did not understand the need for dividing the eye-space vertex position by the w coordinate. Why is it done here? If I remove this division the output is still the same.

Thanks for the help guys.

> I really did not understand the need for dividing the eye-space vertex position by the w coordinate. Why is it done here? If I remove this division the output is still the same.

Because that's just how homogeneous coordinates work: the w component exists for them. Camera space can have a non-1 w if you are doing some kind of projection in your model-to-camera matrix. Since you, like most people, are not, it doesn't matter.

Thanks, Alfonso, for your input.

Regards,
Mobeen

mobeen, try tvmet for the matrix library; it gives very good performance with C++.

Thanks ugluk, I will give it a try.