NVidia PerPixel Reflections - getting desperate now...

I’m trying to get per-pixel cubemap reflections working on my GeForce3.
I’m using a 100% NVIDIA path, for experiment’s sake - that means NV_vertex_program, NV_texture_shader and NV_register_combiners.
I’m calculating my per-vertex normals and tangents correctly (normalised) - I’m passing the normal through the NORMAL channel, and my tangents through the TEXCOORD0 channel.

I’m getting pretty depressed.
I’m trying to get this working on a mesh representing the ground (a planar surface on the XZ plane).

Attempt to describe the symptoms:
At the moment, my camera starts looking down the Z axis. If I tilt my head down towards the reflection mapped ground, the reflection moves upwards, which is correct. Now, if I turn 90 degrees to my left (so I’m looking down the X axis), and look up and down, the reflection seems to rotate around the X axis, in other words it appears to ‘roll’ around the axis I’m looking down.

My normal maps have the ‘up’ vector in the blue component.
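For reference, here’s a CPU-side sketch of the usual way a stored normal-map texel decodes back to a vector (an assumption about the common [0,255] → [-1,1] encoding, not code taken from this post):

```python
# CPU-side sketch of the usual normal-map decode: each 8-bit channel stores
# a vector component remapped from [-1, 1] to [0, 255], with the surface
# "up" direction in the blue channel. This encoding is an assumption about
# typical normal maps, not something stated in the post.

def decode_normal(r, g, b):
    """Map 8-bit RGB back to a unit-length tangent-space normal."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

# A "flat" texel (128, 128, 255) decodes to roughly (0, 0, 1),
# i.e. straight up in tangent space.
print(decode_normal(128, 128, 255))
```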

Here’s my vertex program constants:

const 0 worldXprojmat // world * projection matrix, occupies c[0]..c[3]
const 6 uservalue 0 0 0 0.01 // .w = bumpmap texcoord scale
const 8 worldmat // modelview matrix, occupies c[8]..c[11]

Here’s my vertex program:-


# transform position into clip space

DP4 o[HPOS].x, c[0], v[OPOS];
DP4 o[HPOS].y, c[1], v[OPOS];
DP4 o[HPOS].z, c[2], v[OPOS];
DP4 o[HPOS].w, c[3], v[OPOS];

# move tangent vector from texcoord0 into R0

MOV R0, v[TEX0];

# move normal vector from normal into R2

MOV R2, v[NRML];

# calculate binormal: R1 = crossproduct(R0, R2)

MUL R1, R0.zxyw, R2.yzxw;
MAD R1, R0.yzxw, R2.zxyw, -R1;

# transform tangent into eye space

DP3 R5.x, R0, c[8];
DP3 R5.y, R0, c[9];
DP3 R5.z, R0, c[10];

# transform binormal into eye space

DP3 R6.x, R1, c[8];
DP3 R6.y, R1, c[9];
DP3 R6.z, R1, c[10];

# transform normal into eye space

DP3 R7.x, R2, c[8];
DP3 R7.y, R2, c[9];
DP3 R7.z, R2, c[10];

# transform position into eye space

DP4 R4.x, c[8], v[OPOS];
DP4 R4.y, c[9], v[OPOS];
DP4 R4.z, c[10], v[OPOS];
DP4 R4.w, c[11], v[OPOS];

# build TBN matrix in TEX1..TEX3, negated eye-space position in .w

MOV o[TEX1].x, R5.x;
MOV o[TEX1].y, R6.x;
MOV o[TEX1].z, R7.x;
MOV o[TEX1].w, -R4.x;

MOV o[TEX2].x, R5.y;
MOV o[TEX2].y, R6.y;
MOV o[TEX2].z, R7.y;
MOV o[TEX2].w, -R4.y;

MOV o[TEX3].x, R5.z;
MOV o[TEX3].y, R6.z;
MOV o[TEX3].z, R7.z;
MOV o[TEX3].w, -R4.z;

# generate texture coordinates for the bumpmap from the vertex position

MUL o[TEX0].xyz, c[6].w, v[OPOS].xzyw;
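
As a sanity check on the MUL/MAD cross-product idiom in the program above, here’s a CPU-side sketch of the same two instructions (Python used purely for illustration):

```python
# CPU-side sketch of the MUL/MAD cross-product pair from the vertex
# program above:
#   MUL R1, R0.zxyw, R2.yzxw
#   MAD R1, R0.yzxw, R2.zxyw, -R1
# which computes R1 = R0.yzx * R2.zxy - R0.zxy * R2.yzx = cross(R0, R2),
# i.e. binormal = cross(tangent, normal).

def swizzle(v, pattern):
    idx = {'x': 0, 'y': 1, 'z': 2}
    return [v[idx[c]] for c in pattern]

def cross_mul_mad(t, n):
    # MUL R1, R0.zxyw, R2.yzxw
    r1 = [a * b for a, b in zip(swizzle(t, 'zxy'), swizzle(n, 'yzx'))]
    # MAD R1, R0.yzxw, R2.zxyw, -R1
    return [a * b - c for a, b, c
            in zip(swizzle(t, 'yzx'), swizzle(n, 'zxy'), r1)]

# For the flat XZ ground: tangent +X, normal +Y -> binormal +Z.
print(cross_mul_mad([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]
```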


Here’s my texture shader:


The register combiner just outputs tex3 - which works fine.

Any kind of help would be really appreciated.

Ok, I’ll tell you how far I’ve got in my thinking.
This is my understanding of the following terms:-
> Local-space (aka Object-space) = the space in which the object’s vertices are defined.
> World-space = the space object space is placed into - the object (and therefore all its vertices) is positioned/orientated in world space.
> Eye-space = the space world space ends up in after the camera’s transform is applied.
> Clip-space = the space eye space ends up in after the perspective transformation.
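
Those definitions chain together as matrix multiplies; here’s a minimal sketch of the chain (the matrices and values are hypothetical, for illustration only):

```python
# Minimal sketch of the transform chain the definitions above describe:
#   object -> (model) -> world -> (view) -> eye -> (projection) -> clip.
# Matrices are hypothetical 4x4 row-major lists; values are made up.

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

model = translate(5, 0, 0)        # places the object in the world
view = translate(0, 0, -10)       # the camera transform
modelview = mat_mul(view, model)  # the single combined matrix GL supplies

# A vertex at the object's origin sits at world (5,0,0), eye (5,0,-10).
print(mat_vec(model, [0, 0, 0, 1]))      # [5, 0, 0, 1]
print(mat_vec(modelview, [0, 0, 0, 1]))  # [5, 0, -10, 1]
```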

Now, all the dotproducteyereflect demos I’ve seen talk about eye space and cubemap space.
Is cubemap space essentially world space, i.e. without the camera transform?
They say I should transform the TBN vectors into cubemap space, but this doesn’t work. It seems that because I’m using the modelview matrix for two things (transforming my objects into world space AND transforming them into eye space), I can’t get my per-pixel cubemapping to work. Ah, I’m confused - please, someone, spell it out for me. My cubemap is in world space - I don’t want it to rotate or anything, it just represents the environment - but I suppose it does rotate when the camera rotates…
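
To illustrate the modelview problem being described, here’s a hedged CPU-side sketch (all names are my own assumptions, not code from this post) of why a basis transformed by the full modelview rotates with the camera, whereas one transformed by the model part alone stays fixed in world (cubemap) space:

```python
import math

# Sketch of the model/view split discussed above. If the cubemap is fixed
# in world space, the TBN basis should only receive the model
# (object-to-world) rotation; feeding it the full modelview drags the
# camera rotation in as well. All matrix names here are illustrative
# assumptions, not code from the post.

def rot_y(deg):
    """3x3 rotation about the Y axis."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def xform3(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

model_rot = rot_y(0)   # the ground mesh isn't rotated in the world
view_rot = rot_y(90)   # the camera turned 90 degrees to the left

tangent = [1.0, 0.0, 0.0]  # object-space tangent of the XZ ground plane

# Cubemap space = world space: apply only the model rotation.
t_cubemap = xform3(model_rot, tangent)

# The symptom described above: transforming by the full modelview means
# the basis (and hence the reflection) swings round whenever the camera
# does.
t_wrong = xform3(view_rot, xform3(model_rot, tangent))

print(t_cubemap)  # stays put along +X
print(t_wrong)    # has rotated to roughly -Z with the camera
```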