Normal mapping

Hi everyone,

I implemented a simple point light with normal mapping in RenderMonkey and copied it over to my own C++ framework, and for some odd reason I am having the issues you can see below:


First of all, it’s not a spotlight, though it looks like one. The normal map seems fine, as the highlights are correct.

I am doing my light- and view-direction vector calculations in eye space (premultiplying the light position and view position by the current view matrix in the application); however, I get this strange result. There is not a problem in the calculation of the view matrix. I believe I have checked everything, but I hope I can get some input here, or maybe someone can suggest somewhere else to look for the source of the problem.


Vertex Shader

uniform vec3 eyePosition;	/*In eye space*/
uniform vec3 lightPosition; /*In eye space*/

varying vec3 lightDirection;
varying vec3 viewDirection;
varying vec2 texcoord;
varying float d;

attribute vec3 tangent;
attribute vec3 bitangent;

void main( void )
{
	gl_Position = ftransform();
	vec4 vertPosition = gl_ModelViewMatrix * gl_Vertex;
	texcoord 		 = gl_MultiTexCoord0.xy;
	vec3 normal 	 = gl_NormalMatrix * gl_Normal;
	vec3 esTangent   = gl_NormalMatrix * tangent;
	vec3 esBitangent = gl_NormalMatrix * bitangent;
	vec3 lightVector = lightPosition - vertPosition.xyz;
	float l = length( lightVector );
	float k0 = 0.0001;
	float k1 = 0.0001;
	float k2 = 0.0002;
	d = 1.0 / ( k0 + k1 * l + k2 * l * l );
	lightDirection.x = dot( esTangent, lightVector );
	lightDirection.y = dot( esBitangent, lightVector );
	lightDirection.z = dot( normal, lightVector );
	vec3 viewVector = eyePosition - vertPosition.xyz;
	viewDirection.x = dot( esTangent, viewVector );
	viewDirection.y = dot( esBitangent, viewVector );
	viewDirection.z = dot( normal, viewVector );
}

Fragment Shader

varying vec3 lightDirection;
varying vec2 texcoord;
varying vec3 viewDirection;
varying float d;

uniform sampler2D texture0;
uniform sampler2D texture1;

void main( void )
{
	/* varyings cannot be written in a fragment shader, so use locals */
	vec3 lightDir	= normalize( lightDirection );
	vec3 viewDir	= normalize( viewDirection );
	vec3 normal		= normalize( texture2D( texture1, texcoord ).xyz * 2.0 - 1.0 );
	float dotNL 	= max( 0.0, dot( normal, lightDir ) );
	vec3 reflection = normalize( ( 2.0 * normal * dotNL ) - lightDir );
	float dotRV		= max( 0.0, dot( reflection, viewDir ) );
	vec4 base 		= texture2D( texture0, texcoord );
	vec4 ambient 	= vec4( 0.1, 0.1, 0.1, 1.0 ) * base;
	vec4 diffuse 	= 0.5 * vec4( 1.0, 0.5, 0.5, 1.0 ) * base * dotNL * d;
	vec4 specular	= 10.5 * vec4( 1.0, 0.5, 0.5, 1.0 ) * pow( dotRV, 10.0 ) * dotNL * d;
	gl_FragColor 	= vec4( ( ambient + diffuse + specular ).rgb, 1.0 );
}

Maybe there is something here that you have misunderstood:

uniform vec3 eyePosition;	/*In eye space*/

In eye space, coordinates are relative to the eye, so the eye position is always at (0,0,0).
So, when you transform a vertex into eye space by multiplying it by the modelview matrix, its position is already relative to the eye position.

Thanks for pointing that out, but beyond that, even the diffuse lighting is wrong, and that does not depend on the view direction.

Do the red-green-blue lines represent tangent-binormal-normal?
If yes, they are wrong: the normal should always “point” away from the face.
If the normal is the green one, then the shader is wrong, because you compute the z coordinate with the normal. :stuck_out_tongue:

BTW, you can avoid passing the binormal as a parameter; you can simply compute it with a cross product.

R-tangent, G-normal, B-bitangent

I calculate the binormal using a cross product anyway, but to be honest I don’t know whether passing a vec3 or doing a cross product in the shader is more efficient.

I don’t know what you meant by “…If the normal is the green one then the shader is wrong cause you compute the z coordinate with the normal…”. The last row of the TBN matrix is the normal, so when you multiply a vector by that matrix, the z coordinate is the dot product with that last row.

“I calculate the binormal using cross product anyway but to be honest I don’t know whether passing a vec3 or doing a cross product in the shader is more efficient.”

The GPU performs math operations insanely fast, whereas (depending on the geometry) sending lots of binormals (vec3) down the pipeline is very inefficient and takes more video RAM to store. It’s only once per vertex anyway.


Uniform variables belong to the shader program object, not to the vertex or fragment shader, so I would think it very wasteful to send a uniform every time the vertex or fragment shader is invoked. IMO, it is at least cached, since uniforms are per-primitive “properties/attributes”.
So, in the end, I do not know whether passing the binormal as a uniform is faster or slower than computing it for each vertex.

dogdemir, right now I do not have the time to look deeply at your code, but to support what Rosario said: the vertical vector in eye space is always the Y axis, and the one orthogonal to the screen and pointing out of it is the Z axis. Maybe you have not taken care of that.

Sorry, my English is still awful (I’m working on it). :frowning:

You are using the normal to compute Z and the tangent to compute X, so the binormal is inverted (you are using a left-handed coordinate system); check it again.

I think there’s some confusion in the TBN matrix, because it is the only difference I found compared to my shader (edit: plus the obvious eyePosition mistake).

@dletozeun: bitangents are vertex attributes, not uniforms. :expressionless:

I guess you mean attributes, not uniforms; they are part of the vertex stream, which requires some calls to set up and consumes memory that could be used for something else. IMHO, one cross product per vertex wins.
I don’t have much time to look through the code either, but at a guess, the transformation is wrong here. Also, how do you compute the tangent and binormal before using them?

Oops! My mistake, you are both totally right, I am stupid… :slight_smile:

I am starting to think that there is a problem with the texture-space transformation too, so I will give the code below. The code adds a single polygon to a mesh. I have updated it since the first post, and I am not sure whether it works for polygons with more than three vertices, but right now I only work with triangles, so it shouldn’t matter.

V0, V1, V2 are the vertices of the triangle, S and T are the texture coordinates, and I am solving the following system:

V1-V0 = (S1-S0)TAN + (T1-T0)BITAN
V2-V0 = (S2-S0)TAN + (T2-T0)BITAN

-compute face normal
-compute face tangent
-for each vertex in the polygon
----compute the new average of the normal/tangent/bitangent of the vertex in the mesh
-add polygon to the mesh

Current screenshot

void Mesh::addPolygon( Polygon p )
{
	//Calculate the face normal
	Vector<int>* indices = p.getIndices();
	Vec3D vA = vertexList_[ (*indices)[0] ].position;
	Vec3D vB = vertexList_[ (*indices)[1] ].position;
	Vec3D vC = vertexList_[ (*indices)[2] ].position;

	Vec3D normal = cross( vB - vA, vC - vA );

	p.setNormal( normal );

	//Calculate the tangent/bitangent
	Vec3D Q1 = vB - vA;
	Vec3D Q2 = vC - vA;

	Vec2D vAtexCoord = vertexList_[ (*indices)[0] ].texCoord;
	Vec2D vBtexCoord = vertexList_[ (*indices)[1] ].texCoord;
	Vec2D vCtexCoord = vertexList_[ (*indices)[2] ].texCoord;

	float S1 = vBtexCoord.x - vAtexCoord.x;
	float T1 = vBtexCoord.y - vAtexCoord.y;
	float S2 = vCtexCoord.x - vAtexCoord.x;
	float T2 = vCtexCoord.y - vAtexCoord.y;

	float det = 1.0f / ( S1 * T2 - S2 * T1 );

	Vec2D r1( T2, -T1 );

	Vec3D tangent( det * dot( r1, Vec2D( Q1.x, Q2.x ) ),
		       det * dot( r1, Vec2D( Q1.y, Q2.y ) ),
		       det * dot( r1, Vec2D( Q1.z, Q2.z ) ) );

	Vec3D bitangent = cross( normal, tangent );

	//Accumulate into the per-vertex normal/tangent/bitangent averages
	for( uInt i = 0; i < indices->size(); i++ )
	{
		Vertex *v = &vertexList_[ (*indices)[i] ];

		v->normal    = v->normal + normal;
		v->tangent   = v->tangent + tangent;
		v->bitangent = v->bitangent + bitangent;
	}
}


See this sample: click
It’s done with the FFP, so the tangent, binormal, and normal are submitted and calculated as they should be.

Thanks, that looks like a good sample; I will take a look at it.