How does id do that?

OK, I am curious. id builds their models at ULTRA high resolutions, then rebuilds them as low poly, but somehow retains the detail and quality of the original model. HOW??? All I can figure is that they somehow use the normals of the original model to “fake” a layer of detail on top of their low-poly model. Once again, HOW??? Anyone have any idea how they do this?

Like everybody else, by using displacement maps…


That would seem to be the key idea. I think they use some plugin or some specialized software for generating their normal maps.

I think you would have to combine this method with an algorithm that intelligently simplifies the model (which involves finding the gradient).

Doing this right is an art, since they manipulate the model to fine-tune everything. Remember that the model will be animated as well.


Displacement maps? Nah, there you’re a bit misinformed, deepmind. Displacement mapping is only supported by the new Matrox Parhelia, and even on that card it will NOT be supported in the Doom 3 engine, because it’s incompatible with volumetric shadows. Same with ATI’s TruForm, because both modify the polygons that get submitted.

They use something similar to PolyBump. A demo of PolyBump is available for download (needs a GeForce3+ or Radeon 8500 or higher).

In general it works the following way:
-they check which polygons are not that important and first calculate a low-resolution mesh, but remember which of the removed polygons were connected to which of the remaining ones. Demos of mesh decimation can be found all over the internet.
-then they create a DOT3 bump map and fill it with the removed polygons (their normal vectors)

And if you then display it with DOT3 bumpmapping, the resulting model looks very similar to the original, because the shading matches so closely, even if you move the light. Of course it looks worse than the object with 100 times as many polygons, but it is really impressive how much similarity they still achieve, no comparison to an unbumped low-poly model. The only limitation they have is that deep structures cannot be bumped. But things like cables, pores and so on you can replace entirely with bump maps… and save a bunch of polygons this way.
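To make the baking step above concrete, here is a minimal sketch (not id’s actual tool, just the idea flattened down to a 2D height profile sitting over a flat low-poly segment): for every texel of the low-poly surface, sample the high-res geometry “above” it and store its normal in the map.

```python
import math

def bake_normal_map(highres_heights, map_size):
    """For each texel of a flat low-poly segment, store the normal of the
    high-res profile directly above it (a 2D stand-in for casting from the
    low-poly surface to the high-res mesh)."""
    n = len(highres_heights)
    normals = []
    for t in range(map_size):
        # which high-res sample sits over this texel
        i = min(int(t / map_size * (n - 1) + 0.5), n - 2)
        # slope of the high-res profile at that sample
        dx = 1.0 / (n - 1)
        dh = highres_heights[i + 1] - highres_heights[i]
        # normal is perpendicular to the slope, normalized
        length = math.hypot(dh, dx)
        normals.append((-dh / length, dx / length))
    return normals
```

A completely flat high-res profile bakes to straight-up normals; any bump in the profile shows up as tilted normals in the map, which is all the lighting ever sees.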


[This message has been edited by BlackJack (edited 07-26-2002).]

blackjack: while they don’t use the displacement map as a displacement map in the engine, they do actually generate the displacement map. that’s the same as a bump map, as far as generation goes. so what? (okay, they should have said bump map…)

I thought Dot3 was the method; too bad it’s not supported by all cards on the market. I am currently using an emboss method to calculate my bump maps (looks decent). But I hope that as the technology changes, so does the ability to use bump maps.

davepermen, you are wrong.
A displacement map stores height information. A normal map stores a direction.

A bump map is usually a height map, which must be converted to a normal map, which is then converted into object space (if you want to do dot3 bumps)… OK, I know, I could also convert my light vectors into tangent space, but that’s not the point. The point is that you have to do some 3D-space conversion.

So a bump map has 1 height value (from which you have to derive the normal, either in a preprocess step or at runtime),
and a normal map has 3 values: the (normalized) direction vector. And in the special normal-mapping case in Doom 3 (unique mapping for every face) you can also store the normals in object space, so you don’t need to do any 3D-space conversions and you don’t have to interpolate/normalize the face normals… just take them and DOT them with the light vector (in object space, of course).
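The object-space case described above boils down to one decode and one dot product per pixel. A minimal sketch in plain Python (stand-in helpers, not the Doom 3 code):

```python
def decode_texel(rgb):
    """Expand an 8-bit-per-channel normal-map texel from [0, 255] to [-1, 1]."""
    return tuple(c / 127.5 - 1.0 for c in rgb)

def dot3_lighting(normal, light):
    """Per-pixel diffuse term: clamp(N . L, 0).  Both vectors are already in
    object space, so no tangent-space transform is needed (the 'unique
    mapping for every face' case)."""
    nx, ny, nz = normal
    lx, ly, lz = light
    return max(0.0, nx * lx + ny * ly + nz * lz)
```

A texel whose decoded normal faces the light gives full brightness; one facing away clamps to zero, which is exactly what the DOT3 combiner does in hardware.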

So normal maps have nothing to do with displacement maps. The techniques are completely different, and the source data is not the same either.

no one was talking about normal maps. bump maps are the same as displacement maps. that you need normal maps for dot3 bumpmapping, i know very well, and i know all the rest needed for today’s per-pixel lighting. i know as well that it’s not necessary to store normal maps at all, if you don’t want to… for dot3 bumpmapping, yes! so your statement is in fact wrong…

i know it’s just nitpicking. but a displacement map and a bump map are used to store the same thing. displacementmapPING and bumpmapPING are different things: one displaces the mesh with the HEIGHTMAP, the other one shades with the help of the HEIGHTMAP. (and in most cases today you first convert the HEIGHTMAP into a more proper format: a NORMALMAP)
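That heightmap-to-normalmap conversion step can be sketched like this (a minimal Python version using central differences; note that every output normal reads several neighbouring source pixels):

```python
import math

def heightmap_to_normalmap(height, scale=1.0):
    """Convert a grayscale height map (2D list, values 0..1) to a normal map.
    Each normal comes from the height differences to the pixel's neighbours
    (central differences), clamped at the borders."""
    h, w = len(height), len(height[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * scale
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * scale
            # normal of the surface z = height(x, y), normalized
            inv = 1.0 / math.sqrt(dx * dx + dy * dy + 1.0)
            normals[y][x] = (-dx * inv, -dy * inv, inv)
    return normals
```

A flat height map converts to all-(0,0,1) normals; any slope tilts the normal away from straight up.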

but for storage, a heightmap/displacementmap/bumpmap takes less memory. so they use that for sure

yeah, ok i understand what you mean.

At the end of last year I did some tests with my new GF3, and one of those tests was to create normal maps of high-res meshes and map them
onto a low-res mesh. I found that using normal maps (read: reusing the surface direction information for lighting) is much more accurate than using bump maps.
That is because a bump map must first be converted into a normal map, and in this process you have to take the neighbours of a given pixel into account, which blurs your final normal map.
A 256x256 bump map results in a blurry normal map. A bump map is also usually 8 bit, which gives you really poor precision when creating the normal maps (I assume you know how… based on the differences from the neighbours).

A 128x128 normal map doesn’t need to be blurred and already contains the correct normal information, which in most cases looks better than a 256x256 height map (and is smaller than a height map of double the size).

Those are my own experiences with this method, and because of them I still say they are using normal maps and not height maps.

Interesting, Adrian. So by computing the normal map directly you prevent blurring and improve precision dramatically.

I’m pretty sure that must be the way they’re doing it.

Cheers for that!


improving precision:
store only 16-bit x/y components for your bump maps.
blurring? not really. i’m currently doing some pixel-screen bumpmapping (meaning a simple 2d effect) and there i can see it does NOT blur at all.
that’s my experience… yours can be different…
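The 16-bit x/y trick works because a unit normal only has two degrees of freedom: if you know x and y, you can rebuild z on the fly. A tiny sketch, assuming the normals point out of the surface (z ≥ 0):

```python
import math

def reconstruct_z(x, y):
    """Recover the z component of a unit normal from its stored x and y
    components: z = sqrt(1 - x^2 - y^2), clamped so rounding error in the
    16-bit components can't produce a negative argument."""
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))
```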

The technique used to recover the detail of simplified meshes as a normal map (used by id in Doom 3) is not new at all. We have been using it in our lab since 1997. It was presented 4-5 years ago at the IEEE Visualization conference.
look at
for a detailed explanation of how to do it (some downloadable papers too).

Davepermen: No, they don’t use displacement maps at all, not even as an intermediate step. Why should they? It’s pointless and also less exact.

What’s their input?
-the highres polygon taken away
-the lowres polygon to which the highres one is bound

With this they can relatively easily calculate the region in the normal map representing the highres polygon. They just need to rotate it so that the highres polygon’s normal is (0,0,1), and can then directly store its interpolated vertex normals in that map. I don’t see any point in using displacement maps for PolyBump at all…
But I guess that FarCry as well as DOOM 3 will calculate displacement maps too, to support hardware which doesn’t support DOT3 bumpmapping (offset bumpmapping).
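The “rotate it so the normal is (0,0,1)” step amounts to expressing a vector in an orthonormal basis built around the face normal. A small Python sketch; the tangent choice here is an arbitrary assumption (real tools would derive it from the texture mapping):

```python
import math

def to_face_space(face_normal, v):
    """Express vector v in a basis whose third axis is face_normal, so
    face_normal itself maps to (0, 0, 1)."""
    nx, ny, nz = face_normal
    # pick any vector not parallel to the normal to seed the basis
    ax, ay, az = (1.0, 0.0, 0.0) if abs(nx) < 0.9 else (0.0, 1.0, 0.0)
    # tangent = normalize(a - (a . n) * n)
    d = ax * nx + ay * ny + az * nz
    tx, ty, tz = ax - d * nx, ay - d * ny, az - d * nz
    tlen = math.sqrt(tx * tx + ty * ty + tz * tz)
    tx, ty, tz = tx / tlen, ty / tlen, tz / tlen
    # bitangent = n x t
    bx = ny * tz - nz * ty
    by = nz * tx - nx * tz
    bz = nx * ty - ny * tx
    vx, vy, vz = v
    return (vx * tx + vy * ty + vz * tz,
            vx * bx + vy * by + vz * bz,
            vx * nx + vy * ny + vz * nz)
```

Feeding the face normal through itself always yields (0,0,1), which is exactly the property the map-filling step relies on.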


i think i remember they don’t use any geometry-deforming LOD methods (TruForm, HOS or displacement maps) because those would screw up their stencil volumes…

That’s what I already said in my first post, Adrian.

you are right, shame on me.


While your usage of “bump map” for the grayscale height map is the traditional CG usage, these days real-time graphics people actually more often mean “normal maps” than “height maps” when they say “bump map”. It’s all because of “DOT3 bump mapping” (which uses normal maps).

Sometimes, it’s amazing how well humans can communicate, despite the continual confusion caused by human language :)

Isn’t normal mapping also called “normal perturbation” and “dot3 lighting”? There are a lot of terms out there, and it seems like they all mean the same thing. How else are you going to do basic bumpmapping?

It surprises me that Doom 3 runs so fast with stencil shadows on top of all the other effects. It’s got to be the movie sequences, right?
There are other modern games that don’t have half the special effects of Doom 3, and they don’t run too fast on my machine. Try Serious Sam 2 to see what I mean. The damn thing uses 100 MB of RAM and it doesn’t look much better than the first version.


it’s amazing how well humans can communicate, despite the continual confusion caused by human language

Hm… jwatte… you sound to me like some alien battleship commander going “let’s destroy these mentally underdeveloped human beings”, hehe. But yes, you are right, many words have many meanings… argh… you’re giving me nightmares about my old Latin-study days again.


Originally posted by V-man:
Isn’t normal mapping also called “normal perturbation” and “dot3 lighting”.

Eh? To perturb means to ‘offset’. That’s a different trick.
Bump mapping to me is dot products with a normal map. Why are people still talking about the offset trick in this day and age, and on this forum? It looks crap - forget it. On hardware incapable of dot3, don’t even bother trying to emulate it.