Doom3 shading questions

They split the room up. I haven’t seen Doom3, so I can’t say, but they probably don’t have more than one or two projected lights on any single surface.

Originally posted by Korval:
They split the room up. I haven’t seen Doom3, so I can’t say, but they probably don’t have more than one or two projected lights on any single surface.
I remember, like a year or two ago, John Carmack saying something about how they try not to have many lights with overlapping volumes, as that would be really slow (of course that makes sense :slight_smile: ). From playing Doom 3 this is definitely the case; there are not many overlapping light volumes. There are some scenes where a hallway or room has 20 or more lights, but they all have very small bounds, so it runs fast.

BTW…GO GET THE DEMO NOW! hehe. :smiley:

-SirKnight

I don’t need the demo; I have the actual game. I’ve just not yet had the impetus to install it and play it.

I noticed that in the DOOM3 vertex programs there is no code for vertex position transformation. What do they do with those positions?

The vertex programs use the position_invariant option, which means the fixed-function pipeline computes the vertex position.
That is useful for mixing shader and fixed-function passes, because the position computed by a vertex program can differ slightly from the fixed-function result.
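For reference, it looks something like this. Just a minimal sketch, not one of the Doom3 programs, and it assumes the standard ARB_vertex_program entry points (glGenProgramsARB and friends) are available through an extension loader:

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Minimal vertex program using the position_invariant option: the
   fixed-function pipeline transforms the position, so the program must
   not write result.position and only passes the other attributes through. */
static const char vp_source[] =
    "!!ARBvp1.0\n"
    "OPTION ARB_position_invariant;\n"
    "MOV result.texcoord[0], vertex.texcoord[0];\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

static void LoadVertexProgram(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(vp_source), vp_source);
    /* in real code, check glGetString(GL_PROGRAM_ERROR_STRING_ARB) here */
}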

Does using position_invariant cost any speed?

Originally posted by Nil_z:
Does using position_invariant cost any speed?
Probably, but generally it's not noticeable.

Why? It should be faster, as:

  1. the card may use the hard-coded hardware pipeline to compute the final vertex position, instead of running a program

  2. far more importantly, early-Z discarding should be activated, which boosts performance a lot.

the card may use the hard-coded hardware pipeline to compute the final vertex position, instead of running a program
No ATi card of R300 or better has any fixed-function T&L. The NV30 line does, but the jury is still out on the NV40.

far more importantly, early-Z discarding should be activated, which boosts performance a lot
Not using position_invariant will not turn off early-Z. Only writing to the Z-depth in a fragment program does that (among possibly other hardware-specific fragment-based things).
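As an illustration (this is not Doom3's code), the kind of fragment program that does defeat early-Z is one that writes its own depth:

/* Writing result.depth replaces the interpolated depth, so the hardware
   can no longer reject the fragment before the program has run. */
static const char fp_source[] =
    "!!ARBfp1.0\n"
    "TEMP color;\n"
    "TEX color, fragment.texcoord[0], texture[0], 2D;\n"
    "MOV result.color, color;\n"
    "MOV result.depth.z, fragment.texcoord[1].x;\n"
    "END\n";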

Originally posted by Korval:

[quote]far more importantly, early-Z discarding should be activated, which boosts performance a lot[/quote]
Not using position_invariant will not turn off early-Z. Only writing to the Z-depth in a fragment program does that (among possibly other hardware-specific fragment-based things).

Hmm, true, you're right. So there should not be any performance difference.

Thanks nystep, it is very kind of you to explain how those shaders work.
Or rather, for trying to… :wink:

Well, it's OK for the 3D texture; I had been commenting on a vertex program that isn't used in the final product :slight_smile: The interaction.vfp gives a slightly better overview of how the shading works. I wonder why 3D light textures were abandoned by Carmack. Did they require too much texture bandwidth? Too much video memory? I'm really not sure that one more TXP and one MUL in a vertex program is really faster and worth the texture-bandwidth gain, but he must have tested it…
As for the difference in performance between Radeon and GeForce, I've seen ARB_precision_hint_fastest at the beginning of the fragment program…
From the ARB specification:

However, the “ARB_precision_hint_fastest” and
“ARB_precision_hint_nicest” program options allow applications to
guide the GL implementation in its precision selection. The
“fastest” option encourages the GL to minimize execution time,
with possibly reduced precision. The “nicest” option encourages
the GL to maximize precision, with possibly increased execution time.
So it allows the GeForce FX and later cards to run the fragment program with 16-bit floating-point or even 16-bit fixed-point precision, whereas ATI cards are bound to their 24-bit precision.
But anyway, considering the content of the fragment program, nothing seems to require higher precision, does it?
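For reference, the option just sits at the top of the program text, roughly like this (a sketch, not the actual interaction.vfp header):

/* The precision hint is an OPTION line before the program body; on the
   GeForce FX class of hardware the driver may then pick lower-precision
   registers for the whole program. */
static const char fp_header[] =
    "!!ARBfp1.0\n"
    "OPTION ARB_precision_hint_fastest;\n"
    /* ...program body... */
    "END\n";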

Another question about the DOOM3 shaders: how do they use those heightmap textures in the models? Also, I can't find the code that calculates specular lighting in the fragment program; my result looks much worse than the game's. Does anyone know how to do that?

Hey! Has anyone tried to make a demo/example using the interactionR200.vp or interaction.vfp programs?
I had a quick look at them, but I cannot find the code of the interactionR200 fragment shader, because the extension is not supported by the R200. I must try to make one, because interaction.vfp is more complicated and I don't understand some parts.

Originally posted by Nil_z:
Another question about the DOOM3 shaders: how do they use those heightmap textures in the models? Also, I can't find the code that calculates specular lighting in the fragment program; my result looks much worse than the game's. Does anyone know how to do that?
Carmack converts the heightmaps to normal maps and then adds those to the normals of the normal map (if a normal map exists). Check the addnormals "function" in the SDK.

I wonder how and why he does that :confused: BTW, what is the SDK you're talking about?

Originally posted by Nil_z:
I wonder how and why he does that :confused: BTW, what is the SDK you're talking about?
It's done to add small detail that would be a mess to add directly to the high-polygon model that is used to create the normal maps. The height maps are probably all hand-painted in Photoshop, or something like that.
The code to add a height map to a normal map needs two steps:

  1. Create a normal map from the height map. In this step, you take three pixels from the height map forming a plane and calculate the normal of that plane. This is the normal to encode into the new image.
  2. Add this to the other normal map. After decoding the two normals from the two maps, use the following code to add the normals together:
// Convert both normals to slopes (x/z, y/z) so they can be added.
n0[0] /= n0[2];
n0[1] /= n0[2];
n1[0] /= n1[2];
n1[1] /= n1[2];
// Add the slopes and rebuild a vector with z = 1.
normal[0] = n0[0] + n1[0];
normal[1] = n0[1] + n1[1];
normal[2] = 1.0f;
// Renormalize to get the final unit-length normal.
normal.VxNormalize();

Where n0 and n1 are the two decoded normals and normal is the resulting one to encode into the final normal map.
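And just for completeness, a rough sketch of the decode/encode around that snippet, assuming the usual [0,255] to [-1,1] normal map encoding (the function names are made up):

#include <math.h>

/* Decode one normal map texel (RGB bytes) into a float vector in [-1,1]. */
static void DecodeNormal(const unsigned char *texel, float n[3])
{
    for (int i = 0; i < 3; i++)
        n[i] = texel[i] / 255.0f * 2.0f - 1.0f;
}

/* Encode a unit normal back into RGB bytes. */
static void EncodeNormal(const float n[3], unsigned char *texel)
{
    for (int i = 0; i < 3; i++)
        texel[i] = (unsigned char)((n[i] * 0.5f + 0.5f) * 255.0f);
}

/* Combine two decoded normals as above: add their x/z and y/z slopes,
   rebuild a vector with z = 1, and renormalize. */
static void CombineNormals(const float a[3], const float b[3], float out[3])
{
    out[0] = a[0] / a[2] + b[0] / b[2];
    out[1] = a[1] / a[2] + b[1] / b[2];
    out[2] = 1.0f;
    float len = sqrtf(out[0] * out[0] + out[1] * out[1] + out[2] * out[2]);
    out[0] /= len;
    out[1] /= len;
    out[2] /= len;
}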

Take a look at the id developer site for the SDK. It contains all the game code (but no renderer or network code).

The Doom3 mod sdk:
http://www.iddevnet.com/

Why not add the normal info from the heightmap into the normal map beforehand, in the data?

Originally posted by Nil_z:
Why not add the normal info from the heightmap into the normal map beforehand, in the data?
Adding and renormalizing them leads to an average normal. When I implemented this, I realized that it looks really bad :slight_smile:

EDIT: Uh, I should read the post before replying :slight_smile:

I think it was done for flexibility. One material could have just the normal map, and another one could have the same map with scratches added on top of it.

How do I generate that normal map?
I am thinking of this:
get three pixels from (x,y), (x+1,y) and (x,y+1), and calculate a normal from them. But what should I do with the pixels on the right and bottom edges?
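Something like this is what I have in mind, as a C sketch; clamping the samples on the right and bottom edges is just my guess for the border, not something taken from the game:

#include <math.h>

/* Build a normal map from an 8-bit grayscale height map (w*h bytes).
   out must hold w*h*3 bytes (RGB). bumpiness scales the bump strength. */
void HeightMapToNormalMap(const unsigned char *height, unsigned char *out,
                          int w, int h, float bumpiness)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            /* Clamp the neighbour lookups on the right/bottom edges. */
            int xr = (x + 1 < w) ? x + 1 : x;
            int yb = (y + 1 < h) ? y + 1 : y;

            float c  = height[y  * w + x ] / 255.0f;
            float cx = height[y  * w + xr] / 255.0f;
            float cy = height[yb * w + x ] / 255.0f;

            /* Normal of the plane through the three samples:
               (-dh/dx, -dh/dy, 1), then normalized. */
            float nx = (c - cx) * bumpiness;
            float ny = (c - cy) * bumpiness;
            float nz = 1.0f;
            float len = sqrtf(nx * nx + ny * ny + nz * nz);

            unsigned char *p = &out[(y * w + x) * 3];
            p[0] = (unsigned char)((nx / len * 0.5f + 0.5f) * 255.0f);
            p[1] = (unsigned char)((ny / len * 0.5f + 0.5f) * 255.0f);
            p[2] = (unsigned char)((nz / len * 0.5f + 0.5f) * 255.0f);
        }
    }
}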