Doom3 shading questions

Originally posted by Nil_z:
but what should i do with the pixels on the right&bottom edge?
That depends on the texture clamping mode. If you want a repeating (tiled) normal map, you wrap around with (x+1) % width; otherwise you clamp, i.e. sample the rightmost/bottom pixel twice.
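A minimal sketch of that heightmap-to-normal conversion (the function name and flat row-major storage are my assumptions, not anything from DOOM 3), showing both edge strategies via forward differences:

```python
import math

def height_to_normal(height, w, h, scale=1.0, wrap=True):
    """Convert a grayscale heightmap (flat row-major list of floats)
    to per-pixel normals with forward differences. wrap=True tiles the
    map with (x+1) % width; wrap=False clamps, i.e. reuses the
    rightmost/bottom pixel at the edges."""
    def sample(x, y):
        if wrap:
            x, y = x % w, y % h
        else:
            x, y = min(x, w - 1), min(y, h - 1)
        return height[y * w + x]

    normals = []
    for y in range(h):
        for x in range(w):
            dx = (sample(x + 1, y) - sample(x, y)) * scale
            dy = (sample(x, y + 1) - sample(x, y)) * scale
            # normal of the surface z = height(x, y): (-dz/dx, -dz/dy, 1)
            inv = 1.0 / math.sqrt(dx * dx + dy * dy + 1.0)
            normals.append((-dx * inv, -dy * inv, inv))
    return normals
```

The `scale` parameter plays the role of the "strength" knob discussed below: a larger scale exaggerates the slopes and therefore the bumps.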

Why don’t they use another normal map instead of a heightmap to add to the base normal map?
Anyway, I will try the heightmap-to-normal conversion method.

BTW, studying the rendering method in DOOM 3 is quite interesting. I’d like to render some DOOM 3 models in my own rendering code to see the result. Does anyone know where I can find the md5mesh spec for the released DOOM 3? I can only find specs for the alpha.

Hi!

Either take a look at www.doom3world.org/phpbb2/viewtopic.php?t=2884 or simply use the Doom3 SDK available at http://www.iddevnet.com/

Gordon

Manually converting height maps to normal maps would cost unnecessary time and flexibility. The addnormals function could take a strength parameter which the artist could specify in the material file.

I’ve written a DOOM 3 model loader with full animation support, both for version 6 (alpha) and version 10 (release). It’s interesting to see the tangent-space symmetry seams (mirrored texcoords) in the release models, which are practically invisible in the alpha models. DOOM 3 seems to take only polygons with the same “handedness” into account when computing vertex normals…

Specs can be found here: http://www.doom3world.org/phpbb2/viewtopic.php?t=2884

There is no per-vertex normal info in the DOOM 3 model data. How should I calculate the tangent-space transformation?

Originally posted by Nil_z:
There is no per-vertex normal info in the DOOM 3 model data. How should I calculate the tangent-space transformation?
You can always compute the normals yourself on model load up.

-SirKnight

There is no smoothing setting either. Should I average the normals of all polygons that share a vertex to create the vertex normal?

Originally posted by Nil_z:
There is no smoothing setting either. Should I average the normals of all polygons that share a vertex to create the vertex normal?
correct
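A sketch of that averaging (pure Python; the mesh layout with vertices as 3-tuples and triangles as index triples is my assumption, not the md5mesh format):

```python
import math

def compute_vertex_normals(vertices, triangles):
    """Sum the un-normalized face normals of every triangle sharing a
    vertex, then renormalize. Using the raw cross product (length =
    2 * triangle area) makes the average area-weighted."""
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in triangles:
        a, b, c = vertices[i0], vertices[i1], vertices[i2]
        u = [b[k] - a[k] for k in range(3)]
        v = [c[k] - a[k] for k in range(3)]
        # cross product: face normal, not yet unit length
        face = [u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0]]
        for idx in (i0, i1, i2):
            for k in range(3):
                normals[idx][k] += face[k]
    for n in normals:
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        for k in range(3):
            n[k] /= length
    return normals
```

As the model loader posts above suggest, DOOM 3 apparently restricts this averaging to faces of the same tangent-space handedness; this sketch averages everything.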

I have almost finished the MD5mesh viewer and it seems to work fine, thanks for all the help here. But I noticed some strange bright lines over the model when the light moves, as if the specular lighting were incorrect. A screenshot can be found here:
http://gba.ouroad.org/misc_download/2004/10/screenshot.JPG_1098866407.jpg
It only appears on the left side of the model.
Does anyone know what is wrong?

Originally posted by Sunray:
Manually converting height maps to normal maps would cost unnecessary time and flexibility. The addnormals function could take a strength parameter which the artist could specify in the material file.

And you can… addnormals doesn’t take that parameter, but the heightmap function does, and since you then add both normals, it is effectively a strength parameter…

The specular lighting looks completely wrong. How do you calculate it?

The strange specular only appears when the light is at a certain angle; at other times it looks OK to me. Maybe it is a method that only looks right sometimes :stuck_out_tongue:

Here are my vertex and fragment programs. The idea is to dot the normal with the half-angle vector, raise it to a power, multiply by the texel from the specular map, and add that to the result color. I hope the idea is not wrong.

!!ARBvp1.0 OPTION ARB_position_invariant;

ATTRIB iTex0 = vertex.texcoord[0];
ATTRIB tangent = vertex.texcoord[1];
ATTRIB bitangent = vertex.texcoord[2];
ATTRIB normal = vertex.normal;

PARAM mvi[4] = { state.matrix.modelview.inverse };
TEMP lightpos, lightvec,halfvec, temp;

DP3 lightpos.x, mvi[0], state.light[0].position;
DP3 lightpos.y, mvi[1], state.light[0].position;
DP3 lightpos.z, mvi[2], state.light[0].position;

#i am using directional light, normalize to get light direction
DP3 temp, lightpos, lightpos;
RSQ temp, temp.x;
MUL lightvec.xyz, lightpos, temp.x;

#vector from vertex to camera
DP3 temp, vertex.position, vertex.position;
RSQ temp, temp.x;
MUL temp, -vertex.position, temp;

#get half angle vector
ADD halfvec, temp, lightvec;
#normalize half angle vector
DP3 temp, halfvec, halfvec;
RSQ temp, temp.x;
MUL halfvec, halfvec, temp;
#transform to tangent space
DP3 result.texcoord[1].x, lightvec, tangent;
DP3 result.texcoord[1].y, lightvec, bitangent;
DP3 result.texcoord[1].z, lightvec, normal;
DP3 result.texcoord[2].x, halfvec, tangent;
DP3 result.texcoord[2].y, halfvec, bitangent;
DP3 result.texcoord[2].z, halfvec, normal;
MOV result.texcoord[0], iTex0;
END;

!!ARBfp1.0
PARAM lightcolor = state.light[0].diffuse;
PARAM ambient = state.light[0].ambient;
PARAM const = {32.0, 0.0, 0.0, 0.2};
TEMP normal, temp, lightvec, texel,spec, halfvec;
TEX texel, fragment.texcoord[0], texture[0], 2D;
TEX normal, fragment.texcoord[0], texture[1], 2D;
TEX spec, fragment.texcoord[0], texture[2], 2D;

#calc normal from normalmap
MAD normal, normal, 2.0, -1.0;
DP3 temp, normal, normal;
RSQ temp, temp.x;
MUL normal.xyz, normal, temp;

#normalize light direction
DP3 temp, fragment.texcoord[1], fragment.texcoord[1];
RSQ temp, temp.x;
MUL lightvec, fragment.texcoord[1], temp;

#normalize half angle vector
DP3 temp, fragment.texcoord[2], fragment.texcoord[2];
RSQ temp, temp.x;
MUL halfvec, fragment.texcoord[2], temp;

#dot normal with half angle vector

DP3_SAT halfvec, halfvec, normal;
POW halfvec, halfvec.x, const.x;

DP3_SAT temp, normal, lightvec;
MAD_SAT temp, lightcolor, temp, ambient;
MUL temp, texel, temp;

#add specular to result
MAD_SAT result.color, halfvec, spec, temp;
END;

In your vertex shader code… how do you calculate the vector from the vertex to the camera? You don’t even use the camera position, unless you implicitly assume that the camera is always at (0,0,0)?

Y.

I am using OPTION ARB_position_invariant in the vertex program, which means the fixed-function pipeline does the position transform. I think the vertex position I get in the vertex program has already been transformed into view space, so the camera position is always (0,0,0). Maybe I am wrong; I’ll try transforming the position in the vertex program.

#vector from vertex to camera
DP3 temp, vertex.position, vertex.position;
RSQ temp, temp.x;
MUL temp, -vertex.position, temp;
Ysaneya is right: this code assumes the camera is static at position (0,0,0). You need to pass the camera position to the vertex program if you want to move it around in your scene.
In a position-invariant program, the vertices are transformed after the program executes, so during program execution the vertex is still in whatever space it was supplied in (object or world space).

SUB halfVec, viewPos, vertex.position;
ADD halfVec, lightVec, halfVec;

This will give you the half-angle vector for the specular term. viewPos is the camera position in world space, and lightVec is the vector from the vertex to the light source (or a constant vector for directional lights). Take care with normalization.
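In plain terms, the Blinn half-angle vector is the normalized sum of the unit view direction and unit light direction. A CPU-side sketch (helper names are mine, for illustration):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def half_vector(vertex_pos, view_pos, light_dir):
    """Blinn half-angle vector: normalize(view_dir + light_dir).
    light_dir is the unit vector toward the light (a constant for a
    directional light); view_dir points from the vertex to the camera,
    which is why the camera position must be available."""
    view_dir = normalize(tuple(view_pos[k] - vertex_pos[k] for k in range(3)))
    return normalize(tuple(view_dir[k] + light_dir[k] for k in range(3)))
```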

EDIT: typo

I have changed my vertex program to get the camera position by transforming (0,0,0) with the inverse modelview matrix. I should set this position via a program environment parameter, but right now it is just a test. The specular light moves with the diffuse light now and looks OK. The strange specular line appeared because, when the light comes from behind, the dot product of the half-angle vector and the normal can still be positive. I am now using (H.N)*(N.L), as in the ARB_fragment_program spec.
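The fix described above amounts to masking the specular term with the clamped diffuse term, so the highlight vanishes whenever the light is behind the surface. A sketch (function name and default shininess are mine; 32.0 matches the `const.x` in the fragment program above):

```python
def specular(n_dot_h, n_dot_l, shininess=32.0):
    """Blinn-Phong specular masked by the clamped diffuse term:
    (H.N)^shininess * max(N.L, 0). A light behind the surface
    (N.L <= 0) therefore produces no highlight, which removes the
    stray bright lines."""
    n_dot_h = max(n_dot_h, 0.0)
    n_dot_l = max(n_dot_l, 0.0)
    return (n_dot_h ** shininess) * n_dot_l
```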

BTW, about adding a heightmap to a normal map: instead of adding the two normals together, I calculate the offset between the normal derived from the heightmap and (0,0,1), then add that offset to the normal map’s normal. This gives a much better result than just adding the two normals.
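That offset-based combination can be sketched like this (assuming both inputs are unit tangent-space normals; the function name is mine):

```python
import math

def add_normal_offset(base, detail):
    """Combine a base normal with a detail normal by adding the
    detail's offset from the flat tangent-space normal (0, 0, 1),
    rather than summing the two normals directly, then renormalizing.
    A flat detail normal leaves the base normal unchanged."""
    offset = (detail[0], detail[1], detail[2] - 1.0)
    combined = tuple(base[k] + offset[k] for k in range(3))
    length = math.sqrt(sum(x * x for x in combined))
    return tuple(x / length for x in combined)
```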