Good day folks.
As a new visitor and recent user of ARB_vertex_program, I’m facing a problem while transferring the calculation of a diffuse bumpmapping effect from the CPU to the GPU.
In the previous version of my work (without any vertex program) the effect is computed on the CPU using the classical combination of a normalizing cube map, a normal map and a base map. The light is transformed into object space, the L vector from vertex to light is projected onto the local S, T, N basis, normalized by the cube map, and then dot3-combined with the normal map.
Nothing but classical.
It works just fine. But the vertex program I’m now writing for the same purpose won’t render the effect correctly. The object itself renders, but the bumpmapped lighting reacts when only the camera position changes! You may say I have a matrix problem, but I can’t find what in the world differs between the way I compute light position and normals the old way (on the CPU) and the new way (on the GPU).
If I’m allowed to, I’d like to write down the sequence of my GL calls preceding glDrawArrays(), followed by the guilty vertex program.
Here is the process:
- centering the modelview matrix to the object
- transforming light position into object space (using inverse modelview matrix)
- passing this relative position to the GL (glLightfv(GL_LIGHT0, GL_POSITION, relativeSrcPos))
- pointing to the vertex array
- pointing to the normal array
- pointing to the texCoord for TMU0 (cube map)
- pointing to the texCoord for TMU1 (normal map)
- pointing to the texCoord for TMU2 (base map)
- pointing to the S and T arrays as generic vertex attributes 11 and 12 of the vertex program
And here is my guilty vertex program:
!!ARBvp1.0
OPTION ARB_position_invariant;   # assumed, since result.position is never written

ATTRIB iPos       = vertex.position;
ATTRIB iColor     = vertex.color;
ATTRIB iNormal    = vertex.normal;
ATTRIB iTexCoord0 = vertex.texcoord[0];   # cube map
ATTRIB iTexCoord1 = vertex.texcoord[1];   # normal map
ATTRIB iTexCoord2 = vertex.texcoord[2];   # base map
ATTRIB coordS     = vertex.attrib[11];    # generic attribute 11 (S-array element)
ATTRIB coordT     = vertex.attrib[12];    # generic attribute 12 (T-array element)
PARAM  lightPos   = state.light[0].position;
TEMP   vertexToLight;                     # L vector
OUTPUT oColor     = result.color;
OUTPUT oTexCoord0 = result.texcoord[0];
OUTPUT oTexCoord1 = result.texcoord[1];
OUTPUT oTexCoord2 = result.texcoord[2];

# compute the light L vector and normalize it
SUB vertexToLight, lightPos, iPos;
DP3 vertexToLight.w, vertexToLight, vertexToLight;
RSQ vertexToLight.w, vertexToLight.w;
MUL vertexToLight.xyz, vertexToLight.w, vertexToLight;

# compute the cube map texture coords by projecting L onto S, T and N
DP3 oTexCoord0.x, coordS, vertexToLight;
DP3 oTexCoord0.y, coordT, vertexToLight;
DP3 oTexCoord0.z, iNormal, vertexToLight;

MOV oColor, iColor;
MOV oTexCoord1, iTexCoord1;
MOV oTexCoord2, iTexCoord2;

END
(As you can see, I leave the vertex position calculation out.)
To me, by the time the GPU enters this program, the normal as well as the light position should already be expressed in object space, therefore no transformation should be needed.
But I must have a matrix problem or something: the bumpmapped effect reacts to even a slight camera rotation. Am I missing some rule in the transfer of vertex attributes?
I have read and re-read tutorials and examples but can’t find what messes up my once-beautiful effect.
Thank you for your attention and for any ideas.