Normal mapping using shaders

hello,

i want to calculate normal mapping on a texture using shaders, but i have a problem in the fragment shader because i'm a bit lost. i have calculated the TBN matrix and passed the normal to the fragment shader. now how can i apply a formula to convert between the normal and an RGB value in the range [0,1]? using something like this:

Normal.x = ( Red - 0.5 ) * 2.0;
Normal.y = ( Green - 0.5 ) * 2.0;

i don't know if these formulas are actually 100% correct (i think they are). anyway, the output of the shader should be a normal map, and i also want to be able to add the original color from the texture so i can see the normal mapping already applied. i'm missing something, because right now i can only see the normals on the geometry and not on the texture itself...

here is the code for the vertex shader:


uniform vec3 binormalVec;
uniform vec3 tangentVec;
varying vec3 normal;


void main (void)
{



gl_TexCoord[0] = gl_MultiTexCoord0;//texture mapping stuff

// Build the TBN basis in eye space
    vec3 t = normalize(gl_NormalMatrix * tangentVec);
    vec3 b = normalize(gl_NormalMatrix * binormalVec);
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    mat3 matrixTBN = mat3(t, b, n);

    // Convert the normal into tangent space
    normal = n * matrixTBN;

gl_Position = ftransform();
}




and the fragment shader:


uniform sampler2D tex;//must initialize with texture unit integer

varying vec3 normal;


void main (void)
{
// Expand the stored RGB value from [0,1] back to a [-1,1] normal
N = normalize( (texture2D(tex, gl_TexCoord[0].xy).xyz - 0.5) * 2.0 );

gl_FragData[0] = vec4(N, 1.0);
	
}




can someone help me out?!
Many Thanks

I don’t entirely understand your question. The formulas (for color to normal conversion) look right. I don’t know the actual math for what’s going on in the vertex shader, so I’ll assume that’s all right also.

You want the same texture to define both the normal and the color of a particular point?

“i can only see the normals on the geometry and not on the texture it self…”
How are you looking at the geometry and the texture? Are you modifying the texture? If not, it shouldn’t change.

i want to be able to see the normal map of the texture, then later add the texture itself. right now i would just like to write the normal as an RGB value in the fragment shader, but i can't seem to understand how to do that. do i need to pass the TBN matrix to the fragment?!

the first time i tried, i was just passing the normal of each vertex to the fragment, that's why i could only see the normals of the vertices and not on the texture as it should be doing... so right now let's just forget that i said that.

is it possible to access the RGB value of each pixel from the texture, in the fragment shader itself? so i can calculate the value of the normal of each pixel, and write the normal as an RGB value?!

In the vertex shader i just put the normal in tangent space, instead of it being in world coordinates...

i want to do something like this:

n.x = ((n * tex.r) - 0.5) * 2.0;

but i get an error that i can't combine a vector with a uniform sampler2D. how can i access the RGB info in the texture and interpolate it with the normal?

I had a similar problem recently and solved it.
I used the following shaders (simple bumpmapping, lighting per pixel and fog). They are not optimized, but working, and I used floats instead of booleans (somebody claimed they are faster).

vertex:


varying float fogFactor;
varying vec3 pos;
varying mat3 TBN;

uniform float bMode;

attribute vec3 tangent;



void main( void )
{
	//const vec3  wievPos   = vec3( 0.0, 0.0, 1.0 );
	const float density   = 0.01;
	
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	
	if(bMode != 0.0)
	{
		;
	}
	else
	{
		vec4 V = gl_ModelViewMatrix * gl_Vertex;
		pos = V.xyz;
		
		fogFactor = exp( -pow(density * (length( V ) - 160.0) , 2.0 ) );
		gl_TexCoord[0] = gl_MultiTexCoord0;
		gl_FrontColor = gl_Color;
		
		vec3 t = gl_NormalMatrix * tangent;
		vec3 n = gl_NormalMatrix * gl_Normal;
		TBN = mat3( t, cross( t, n), n );
		
	}
}

and fragment:


uniform vec4 lightColor;
uniform vec4 fogColor;

uniform float bumpmapping;
uniform float texturing;

varying float fogFactor;
varying float dist;
varying vec3  pos;

varying mat3 TBN;

uniform vec3  lightPosition;
uniform float bMode;

uniform sampler2D map;
uniform sampler2D tex;

const float shininess = 128.0;
const vec4  ambient   = vec4( 0.2, 0.2, 0.2, 1.0 );

const float constantAttenuation  = 0.1; 
const float linearAttenuation    = 0.001;
const float quadraticAttenuation = 0.0002;

void main( void )
{
	vec4 color = ambient;
	
	if(bMode != 0.0)
		gl_FragColor = vec4( 0.0, 0.0, 0.0, 1.0 );
	else
	{
		vec3 nn;
		if(bumpmapping != 1.0)
			nn = TBN[2];
		else
		{
			nn = texture2D( map, gl_TexCoord[0].st ).rgb;
			nn = TBN * normalize( 2.0 * nn - vec3(1.0) );
		}
		
		vec4 texColor = vec4( 1.0, 1.0, 1.0, 1.0 );
		
		if(texturing == 1.0)
			texColor = texture2D( tex, gl_TexCoord[0].st );
		
		vec3 lDirection =  normalize( lightPosition - pos );
			
		float dist = distance( pos, lightPosition );
		float att = 1.0 / ( constantAttenuation + dist*linearAttenuation + dist*dist*quadraticAttenuation );
		
		
		vec3 R = reflect(-lDirection, nn);
		
		float specular = pow( max( dot( nn, R ) , 0.0 ), shininess );
		color += clamp( att * max( dot( nn, lDirection ), 0.0 ) * gl_Color * lightColor + gl_Color * specular * att, 0.0, 1.0 );
			
		gl_FragColor = mix( fogColor, clamp( texColor * color, 0.0, 1.0 ), fogFactor );
		//gl_FragColor =vec4( abs(nn), 1.0 );/*only if you want to 'see' the normals*/
		
	}
}

screens:

Hope it helped.

ps. Sorry about my poor English.

hello kowal,

many thanks for your reply, these are the lines i didn't know about and was looking for:

"nn = texture2D( map, gl_TexCoord[0].st ).rgb;
nn = TBN * normalize( 2.0 * nn - vec3(1.0) );
"

i think now everything looks ok, but i still have a question about the fragment shader: should it be done the 1st way? in the shader designer it looks like this:

and with the second way it looks like this:

can someone test my shaders and let me know which way is the best to go? i think the second looks ok but i'm not sure about it. here is the code for both vertex and frag:

Vertex:


varying vec3 pos;
varying mat3 TBN;

uniform vec3 tangent;

void main( void )
{
	
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	

	vec4 V = gl_ModelViewMatrix * gl_Vertex;
	pos = V.xyz;
		

	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_FrontColor = gl_Color;
	
	/* Calculate TBN matrix */	
	vec3 t = gl_NormalMatrix * tangent;
	vec3 n = gl_NormalMatrix * gl_Normal;
	TBN = mat3( t, cross( t, n), n );
		
	
}


Frag:


uniform sampler2D tex;

varying vec3  pos;
varying mat3 TBN;


void main( void )
{
	/*normal at each fragment*/
	vec3 nn;

	/* get the rgb value at texture rgb channel */
	nn = texture2D( tex, gl_TexCoord[0].st).rgb;

	/* expand to the range [-1,1] and rotate into the TBN basis */
	nn = TBN * normalize( 2.0 * nn - vec3(1.0) );
	
		//1st Option 
//	gl_FragColor =vec4(abs(nn), 1.0 );/*only if you want to 'see' the normals*/
	  
	     //2nd Option
	gl_FragColor =vec4(nn.x,nn.y,1.0, 1.0 );/*only if you want to 'see' the normals*/	
	
}


ps: the original texture is this one. also, i think there should be some green in the 2nd option, since the color is written as (Nx, Ny, 1.0)...

For normal/bump mapping you need 2 textures:

  • actual texture (contains colors, in my shaders this one is labeled as tex, it’s not absolutely necessary, but final effect is much better with texture).

  • normal map (in my shader referenced as map) - to generate normal map from image use, for example, nvidia texture tools (free to download, I use them).

If I’m not wrong, you just tried to use a regular texture as a normal map.

kowal, that's exactly what i want to do. i'm going to use MRT and will be rendering to a texture, so i need to find the normals of each frame (texture), so later i can compose the final image with more textures... am i going the wrong way?!

Cheers

I’m still unsure what You are trying to achieve. Do You want to generate a normal map from an image?

I found some useful info about this here:
http://www.katsbits.com/htm/tutorials/creating_bumpmaps_from_images.htm

and here:

http://web.cs.wpi.edu/~matt/courses/cs563/talks/bump/bumpmap.html

hello kowal,

i have seen loads of examples on the web about bump maps / normal maps and how to use them, but the thing is that everyone already has a normal map of the texture to apply to the object in the shader... in my case it's a little bit different because i'm going to render to a texture using an FBO, so every frame will be a texture and i need to find the normals of each texture, so later i can combine it with other textures to compose the final image... i have made some improvements to the shaders and got pretty good results, let me show you:

Normal Map:

Texture + Normal Map:

Vertex Shader:


uniform vec3 LightPosition;  //light position 
uniform vec3 tangent;		//tangent

varying vec3 n;
varying vec3 TBN_EyeDir;		//Eye direction in tangent space
varying vec3 TBN_LightDir;      //light position in tangent space
varying vec3 TBN_HalfVector;    // Half vector   in tangent space
varying mat3 TBN;	            //TBN matrix

void main( void )
{

	//transformation of the vertex to clip space
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	
	// Texture coordinates
    // Multiplication with texture matrix can be omitted if default (identity matrix) is used. To
    // use the texture matrix comment the first and uncomment the second line.
    gl_TexCoord[0] = gl_MultiTexCoord0;
    // gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
	//	gl_FrontColor = gl_Color;
	
	/* transformation of the vertex to eye space
	  Vertex coords from eye position */
	vec4 eV = gl_ModelViewMatrix * gl_Vertex;
	// pos in 3d space
	TBN_EyeDir = vec3(eV)/eV.w;
	
	/* Calculate TBN matrix
	// Tangent space vectors (TBN)
    // The binormal can either be passed as an attribute or calculated as cross(normal, tangent). */	
	vec3 t = gl_NormalMatrix * tangent;
	n = gl_NormalMatrix * gl_Normal;
	vec3 b = cross( t, n);
	TBN = mat3( t, b , n );
	
	/* Light Dir to tangent space  */
	vec3 v = LightPosition - TBN_EyeDir; // Light direction from the vertex, for positional lights
	v.x=dot(v,t);
	v.y=dot(v,b);
	v.z=dot(v,n);
	
	TBN_LightDir=normalize(v);
	
	/* Eye Dir to tangent space
	// Eye direction from vertex, for half vector
    // If eye position is at (0, 0, 0), -mvVertex points to eye position from vertex. Otherwise
    // direction to eye is: eyePosition - mvVertex  */
    v= -TBN_EyeDir;
	v.x=dot(v,t);
	v.y=dot(v,b);
	v.z=dot(v,n);
	
	TBN_EyeDir=normalize(v);
	
 // Half-vector for specular highlights
   	TBN_HalfVector = normalize(TBN_LightDir + TBN_EyeDir);
		
}



Fragment Shader:



uniform sampler2D tex;

varying vec3 TBN_EyeDir;		//Eye direction
varying vec3 TBN_LightDir;     //light position
varying vec3 TBN_HalfVector; // Half vector in tangent space
varying mat3 TBN;	      //TBN matrix


void main( void )
{
	 // Base colour from texture
    vec4 baseColour = texture2D(tex, gl_TexCoord[0].xy);
      
    // Uncompress normal from normal map texture
 	// vec3 normal = normalize(texture2D(tex, gl_TexCoord[0].xy).xyz * 2.0 - 1.0);
    // Depending on the normal map's format, the normal's Y direction may have to be inverted to
    // achieve the correct result. This depends - for example - on how the normal map has been
    // created or how it was loaded by the engine. If the shader output seems wrong, uncomment
    // this line:
    // normal.y = -normal.y;
    
     /**/ vec2 c= texture2D( tex, gl_TexCoord[0].xy).rg;
	vec2 p= (fract(c) - vec2(0.5));	
	vec3 normDelta=vec3(p.x,p.y,1.0);
	vec3 normal;
	normal = TBN * normalize( (normDelta* 2.0) - 1.0 );
	
//	 vec3 normal = normalize(texture2D(tex, gl_TexCoord[0].xy).xyz * 2.0 - 1.0);
 //    normal = TBN * normal;
    
    // Ambient (note: this must be a vec4 constructor; plain parentheses
    // with commas would collapse to the last scalar via the comma operator)
    vec4 ambient = vec4(0.1, 0.1, 0.1, 1.0) * baseColour;
    
    // Diffuse
    // Normalize interpolated direction to light
    vec3 tbnNormDirToLight = normalize(TBN_LightDir);
    // Full strength if normal points directly at light
    float diffuseIntensity = max(dot(tbnNormDirToLight, normal), 0.0);
    vec4 diffuse = vec4(0.3, 0.3, 0.3, 1.0) * baseColour * diffuseIntensity;
    
    // Specular
    vec4 specular = vec4(0.0, 0.0, 0.0, 1.0);
    
     // Only calculate specular light if light reaches the fragment.
    if (diffuseIntensity > 0.0) {
        // Colour of specular reflection
       // vec4 specularColour = texture2D(specularMap, gl_TexCoord[0].xy);
         vec4 specularColour = vec4(0.2, 0.2, 0.2, 1.0);
        // Specular strength, Blinn-Phong shading model
        float specularModifier = max(dot(normal, normalize(TBN_HalfVector)), 0.0); 
        // Assign (not multiply): specular starts at zero, so multiplying
        // would always yield zero
        specular = specularColour * pow(specularModifier, 5.0);
    }

	 // Sum of all lights
    gl_FragColor = clamp(ambient + diffuse + specular, 0.0, 1.0);

   // gl_FragColor = vec4(normal, 1.0); /* only if you want to 'see' the normals */
}
i have been running both shaders, mine and yours, to see the results and they both look pretty good. let me know what you think, does it look good or not?

ps: if you ever come to Portugal, let me know, we'll go for a beer :slight_smile:

I’ve never generated normal maps myself, so I don’t know if this is a good way to do it:

  • emboss image:
    I used following kernel:

GLfloat embossKernel[] = {
2.0f,  1.0f,  0.0f,
1.0f,  1.0f, -1.0f,
0.0f, -1.0f, -2.0f};

  • grayscale it (according to NTSC weights: 0.3 red, 0.59 green, 0.11 blue).
screens:
original img:

after transformation:

  • use the generated image as a 'fake heightmap':
    black represents portions of the image that are placed 'below' its surface, gray represents the actual surface, and white represents parts that are 'above'.
now the hardest part: how to obtain normals from that? (I’ve also read many tutorials, but as You said, everyone already has a normal map.)
Maybe take the color difference between texels in each direction?
e.g. like this (for each texel):

  1. take the texels below and above our current texel, calculate the difference in color between them, and put that in the G channel of our output image.
  2. take the texels on the right and left of our current texel, calculate the difference in color between them, and put that in the R channel of our output image.
  3. set B channel to 1.0 (max)
  4. normalize(R,G,B)

I’ll try to implement that soon (or not so soon, I’m going on the vacation next week).

ps. Your Images look cool.

ok, this method of generating a normal map from an image (previous post) works after small adjustments
(someone correct me if it isn’t the correct way):

it requires 2 passes: one to emboss and grayscale the image, and a second to generate the normal map.

my shaders:
vertex (minimal, both passes share the same vs):


void main( void )
{
	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

fragment (emboss):


uniform vec2  offset[9];
uniform sampler2D tex;

const vec3 ntsc = vec3( 0.3, 0.59, 0.11 );
const float bias = 0.0;

uniform float kernel[9];
	
void main( void )
{
	vec4 color = vec4( 0.0 );
	
	for(int i = 0;i < 9;i++)
		color += kernel[i] * texture2D( tex, gl_TexCoord[0].st + offset[i] );
	
	float biasedGray = bias + dot( color.rgb, ntsc );
	gl_FragColor = vec4( biasedGray, biasedGray, biasedGray, 1.0 );
}

I rendered a full-screen quad to an FBO using this shader, and then did it again (another FBO) with this fs:


uniform vec2 offset[9];
uniform sampler2D heightmap;
uniform float smoothness;

uniform float diagonal;

/* offsets:

 0  1  2
 3 [4] 5
 6  7  8
*/
void main( void )
{
	vec4 samples[9];
	
	for(int i = 0;i < 9;i++)
		samples[i] = texture2D( heightmap, gl_TexCoord[0].st + offset[i] ); /* r=g=b (grayscale image) */
	
	float Y = 1.0 + samples[1].r - samples[7].r;
	float X = 1.0 + samples[3].r - samples[5].r;
	
	float Q = 0.0;
	float W = 0.0;
	
	if(diagonal == 1.0)
	{
		Q = 1.0 + samples[0].r - samples[8].r;
		W = 1.0 + samples[2].r - samples[6].r;
	}
	
	gl_FragColor = vec4( normalize( vec3( X + (Q+W), Y + (Q+W), smoothness + 4.0 * diagonal) ), 1.0 );
/* (smoothness + 4.0 * diagonal) - added 4.0 * diagonal only to keep the 'smoothness scale' consistent */
}

and used the resulting image as a normal map.
results (left column, from top: original texture, embossed & grayscaled, normal map):


For me the bump-mapped cube looks ok. Any thoughts?

hello again,

sorry, i have been out for a few days. your shaders look amazing, can't wait to try them. just a thought: since you are using the derivative of the image in order to get the normal at each pixel, isn't it the same result if we multiply the pixel value with the TBN matrix?

ps: awesome shaders... really good stuff

I think it’s not enough to multiply the pixel by the TBN (tried that and got garbage). There are two functions in GLSL: dFdx and dFdy, which are supposed to give derivatives (of what?). I know nothing about them (searching the spec right now); maybe there is a way to use them instead of my ‘fake derivatives’?

The spec has the nuts and bolts, and there’s a terrific overview of the entire texture sampling process in the SIGGRAPH Asia 2008 “Modern OpenGL” presentation.

In practice I think the true gradients are approximated with forward or backward differences in a sample quad (2x2 pixels). Basically they’re just a difference in values between the neighbouring pixels.