Environment mapping with normalmap displacement

First off, I should mention I haven’t started writing this shader yet; I’m brainstorming. I have written normal-map per-pixel lighting in GLSL, and I have just finished a working, non-bump-mapped environment-map shader for diffuse lighting and pseudo-reflections.

So, basically, I want to be able to perturb the cubemap lookup by the normal-map texture lookup, and I’m hitting a wall mentally.

In my normal-map lighting shaders I convert the light vector and eye vector to tangent space in the vertex shader (as far as I can tell this is the de facto approach), and simply perturb those by the normal map in the fragment shader. Easy-peasy.
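For concreteness, that tangent-space conversion is just three dot products against T, B, and N. A throwaway Python sketch, with a hypothetical orthonormal basis:

```python
# Transform a vector into tangent space by projecting it onto the
# (assumed orthonormal) tangent, bitangent, and normal axes.
def to_tangent_space(v, T, B, N):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(v, T), dot(v, B), dot(v, N))

# Hypothetical basis where the surface normal points along world +x:
N = (1.0, 0.0, 0.0)
T = (0.0, 0.0, 1.0)
B = (0.0, -1.0, 0.0)     # = cross(N, T)
light = (1.0, 0.0, 0.0)  # world-space light direction along the normal

# A light along the surface normal becomes (0, 0, 1) in tangent space.
print(to_tangent_space(light, T, B, N))  # (0.0, 0.0, 1.0)
```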

But my environment-mapping shader works in world space, computing the fragment position and eye vector in world space (via a model-matrix uniform passed in). Obviously, I can’t just perturb that by the normal map.

I’ve googled (a lot!) and haven’t had any luck. 99% of what I find is texture-combiner dot3 stuff from the fixed-function days, and the rest is DX assembly, which I don’t grok. I’ve found no “high level” algorithm descriptions to help me out. If anybody can give me an idea of how to approach this, or even some sample code, I’d be greatly in your debt.


The only way to do a cubemap lookup is with a world-space vector (cubemaps essentially live in world space). For per-pixel shaders this means you need to rotate the normal vector into world space for each pixel, preferably with a mat3 multiply. AFAIK, there’s no alternative way to do this.

Anyone ever tried quaternion rotation for better performance?

But I’m no authority on this, I’d be interested to hear if there are better ways.
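For the curious, rotating a vector by a quaternion is q * v * conj(q). A minimal Python sketch; the (w, x, y, z) layout is just my convention here, and the numbers are made up:

```python
import math

# Hamilton product of two quaternions stored as (w, x, y, z).
def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

# Rotate a 3-vector by a unit quaternion via q * v * conj(q).
def quat_rotate(q, v):
    conj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0,) + v), conj)
    return (x, y, z)

# A 90-degree rotation about +z takes +x to (approximately) +y.
h = math.radians(45.0)  # half-angle
q = (math.cos(h), 0.0, 0.0, math.sin(h))
print(quat_rotate(q, (1.0, 0.0, 0.0)))  # ~ (0.0, 1.0, 0.0)
```

Whether this beats a mat3 multiply depends on the hardware; on most GPUs the mat3 path is the well-trodden one.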

That’s kind of what I was expecting. I was trying to figure out if I could multiply the per-pixel normal by the inverse of the tangent space matrix and then by the model matrix to bring it into world space, but I don’t think my GPU has the horsepower for that.

At the very least, I don’t know if I can invert a matrix in GLSL…

It’s not that hard. Rotation matrices are orthogonal, so the transpose of the matrix is its inverse! If your normal, tangent, and bitangent are orthogonal, then multiplying by the inverse matrix is easily accomplished in GLSL with notation of the form vec3 * mat3 instead of mat3 * vec3. Shade on!
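If it helps, here is that identity spelled out in plain Python rather than GLSL: for an orthonormal matrix, multiplying in the v * M (vec3 * mat3) form is the same as multiplying by the transpose, i.e. the inverse. The rotation and vector below are made up:

```python
# Column-vector multiply: M * v (GLSL's mat3 * vec3).
def mat_vec(M, v):
    return tuple(sum(M[r][c] * v[c] for c in range(3)) for r in range(3))

# Row-vector multiply: v * M (GLSL's vec3 * mat3) == transpose(M) * v.
def vec_mat(v, M):
    return tuple(sum(v[r] * M[r][c] for r in range(3)) for c in range(3))

# A 90-degree rotation about z.
R = ((0.0, -1.0, 0.0),
     (1.0,  0.0, 0.0),
     (0.0,  0.0, 1.0))

v = (1.0, 0.0, 0.0)
rotated = mat_vec(R, v)        # rotate: (0.0, 1.0, 0.0)
back    = vec_mat(rotated, R)  # v * R applies the inverse: (1.0, 0.0, 0.0)
print(rotated, back)
```

So for an orthonormal TBN matrix, `bump * TBN` in GLSL undoes what `TBN * bump` does, with no explicit inverse needed.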

Y-Tension: Thanks, that’s something I didn’t know! You learn something every day…

So I took a stab at the algorithm (using a flat normal map (128, 128, 255) for debugging), and the results are plausible but subtly incorrect. In general, moving the camera around, the reflections seem about right, but there are some discrepancies between the normal-map-perturbed reflection and the flat, non-perturbed one.

I strongly suspect that the issue is my tangent vector generation.

That being said, I think the algorithm is correct now! So thank you!

I’m posting the relevant bits of my code below. Hopefully, somebody here can look it over, just in case I did something boneheaded!

vertex shader

varying vec3 IBLWorldPosition;
varying vec3 IBLTangentSpace_T;
varying vec3 IBLTangentSpace_B;
varying vec3 IBLTangentSpace_N;

/// code

IBLWorldPosition = (ModelMatrix * gl_Vertex).xyz;

IBLTangentSpace_N = normalize(gl_Normal);
IBLTangentSpace_T = normalize(gl_SecondaryColor.xyz); // Tangent stored in gl_SecondaryColor
IBLTangentSpace_B = cross(IBLTangentSpace_N, IBLTangentSpace_T);	

fragment shader

uniform samplerCube IBLDiffuseMap;
uniform samplerCube IBLSpecularMap;
uniform sampler2D NormalMap;
uniform vec4 Specular;

uniform mat4 ModelMatrix;
uniform vec3 CameraPosition;

varying vec3 IBLWorldPosition;
varying vec3 IBLTangentSpace_N;
varying vec3 IBLTangentSpace_T;
varying vec3 IBLTangentSpace_B;

/// code

vec3 normalMap = texture2D(NormalMap, gl_TexCoord[0].st).xyz;
vec3 bump = normalize( normalMap * 2.0 - 1.0);
mat3 TM = mat3( normalize( IBLTangentSpace_T ),
	            normalize( IBLTangentSpace_B ),
	            normalize( IBLTangentSpace_N ));

mat3 RM = mat3( ModelMatrix[0].xyz,
	            ModelMatrix[1].xyz,
	            ModelMatrix[2].xyz );
vec3 worldNormal = normalize( RM * ( bump * TM ) );
vec3 reflectDir = normalize( reflect( normalize( IBLWorldPosition - CameraPosition ), worldNormal));

// now sample from the diffuse map with worldNormal and sample the specular map with reflectDir
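As a sanity check on the reflect() half of this: GLSL’s reflect(I, N) computes I - 2 * dot(N, I) * N, with I pointing toward the surface. A quick Python spot-check with made-up values:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# GLSL reflect(): I - 2 * dot(N, I) * N, with I pointing *toward* the surface.
def reflect(I, N):
    d = dot(N, I)
    return tuple(i - 2.0 * d * n for i, n in zip(I, N))

# An incident ray straight down onto an upward-facing normal bounces straight up.
I = (0.0, 0.0, -1.0)
N = (0.0, 0.0, 1.0)
print(reflect(I, N))  # (0.0, 0.0, 1.0)
```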

Anything obviously foobar here?

might be related,

I chose the “lazy” path.
I took the normalmap shader (which works in tangent space) and just added code that transforms reflected vector back into world space using 3x3 matrix made of normal, binormal and tangent vectors, which are passed to vertex shader anyway (I just needed to pass these to fragment shader). :slight_smile:

Can you show me how?

Here are some screenshots that should clarify the issue. First, two screenshots of a non-cubemapped rendering of a torus: the first has vanilla Lambertian illumination, the second uses a flat normal map. As expected, the two render identically.


Flat normalmap:

Now, here’s a non-normal-map-perturbed cubemap rendering on the torus. It looks and behaves correctly.

And, with the flat normalmap, I’d expect the same output, but nope!

So, I know I’m doing something very wrong. My normal-map-perturbed lighting is correct, so I’m confident that my tangents (computed on the CPU and passed to the shader) are correct. Ergo: my normal-map-perturbed cubemap lookup is malarkey.

Sorry for the inline pics, but they’re needed to clarify the situation.

I really like that GUI.


It’s a hand-rolled C++ GUI rendered in GL. In principle it’s pretty OK (it has some good features), but it’s not something I’d consider resume material!


The trouble turns out to be that I don’t need to invert the tangent matrix after all. So, in the end, the relevant part of the fragment shader looks like this:

	vec3 normalMap = texture2D(NormalMap, gl_TexCoord[0].st).xyz;
	vec3 bump = normalize( normalMap * 2.0 - 1.0);	

	mat3 TM = mat3( IBLTangentSpace_T ,
		            IBLTangentSpace_B ,
		            IBLTangentSpace_N );
	mat3 RM = mat3( ModelMatrix[0].xyz,
		            ModelMatrix[1].xyz,
		            ModelMatrix[2].xyz );
	// bring bump into world space

	vec3 worldNormal = RM * TM * bump;
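One quick way to convince yourself the un-inverted version is right: GLSL’s mat3(T, B, N) uses T, B, and N as its columns, so multiplying it by the flat normal-map sample (0, 0, 1) must return the geometric normal exactly. A hypothetical Python check:

```python
# Column-vector multiply: M * v, matching GLSL's mat3 * vec3.
def mat_vec(M, v):
    return tuple(sum(M[r][c] * v[c] for c in range(3)) for r in range(3))

# Hypothetical orthonormal basis with the normal along world +x.
T = (0.0, 1.0, 0.0)
B = (0.0, 0.0, 1.0)
N = (1.0, 0.0, 0.0)
# Build the matrix with T, B, N as columns, like GLSL's mat3(T, B, N).
TBN = tuple(tuple(col[r] for col in (T, B, N)) for r in range(3))

bump = (0.0, 0.0, 1.0)  # a (128, 128, 255) texel decoded to [-1, 1]
print(mat_vec(TBN, bump) == N)  # True
```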

True, but Lambertian illumination uses only the normal… Are you sure the generated tangent orientations are consistent across the torus? If not, texture coordinates will vary greatly across a polygon, causing discontinuities like the ones in your screenshots.

Ooops just saw it was solved…

At the very least, pass only two of T, B, and N, and calculate the third from the cross product of the other two. This should be faster on most hardware (it saves a varying and a normalize), and will produce cleaner results.
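To illustrate, reconstructing the third basis vector is a single cross product in the fragment shader. A tiny Python sketch of the math, with hypothetical vectors:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# With orthonormal N and T, the bitangent is just their cross product,
# so only two of the three vectors need to be passed as varyings.
N = (0.0, 0.0, 1.0)
T = (1.0, 0.0, 0.0)
B = cross(N, T)
print(B)  # (0.0, 1.0, 0.0)
```

The cross product of two unit-length perpendicular vectors is already unit length, which is where the saved normalize comes from.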