I’m trying to make an overlay that distorts the normally rendered image the way a gas mask might (slightly warped near your peripheral vision).
I noticed Doom 3 uses a normal map to perturb texture coordinates for effects such as warped glass and heat waves.
I wrote a small post-process for my current application that renders the frame to a texture, maps the window coordinates to [0,1], and fetches a normal from a map to perturb those coordinates. It basically mimics Doom 3’s glprogs/heatHaze.vfp, minus the scrolling and variable deformation scale.
This works fine, except that the final image isn’t exactly the same as the rendered image when I use a normal map that points straight ‘up’ (0, 0, 1). It’s ever so slightly off.
I think this is because there’s no way, using normal GL_RGB8 textures, to create a normal map that really points straight up. Without signed or floating-point textures, normal channels are scaled and biased to move them from [0,1] to [-1,1]. The problem is, there’s no way to specify 0.5 (which maps to zero) in a channel: neither 127/255 nor 128/255 equals 0.5 exactly. This results in slight perturbations where I expected none. (I am generating normal maps with Photoshop’s height-map to normal-map filter.)
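A quick sketch of the problem in Python (the function names here are just for illustration, not from any real shader). Under the standard expand used in fragment programs, n = 2*c - 1, no unsigned byte decodes to exactly zero:

```python
def decode(byte):
    """Standard scale-and-bias expand from an unsigned byte to [-1, 1]."""
    return 2.0 * (byte / 255.0) - 1.0

# The two candidates for "zero" both miss by 1/255:
print(decode(127))  # -> -0.00392..., slightly negative
print(decode(128))  # ->  0.00392..., slightly positive
```

That residual 1/255 in each channel is exactly the kind of small, systematic perturbation described above.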
I could use the alpha channel to mask off perturbation, but I want to make sure my thinking isn’t flawed before I proceed. It’s late, and maybe I’ll see the error of my ways tomorrow.
To make that kind of effect you should use a DU/DV map. If you don’t know what it is, it’s just a different kind of normal map: instead of storing normals, it stores distortion values.
Photoshop can do this with the NVIDIA plugin.
I know there are alternatives; I’m just surprised, and slightly annoyed by this limitation.
Your thinking is right, CatAtWork. With RGB8 there are only 256 distinct values per component of the normals in your normal map. Depending on your application this will often appear as blockiness. Moreover, it can become very evident when you try to represent the scalar value 0: whether you represent 0 as 127/255 or as 128/255, you often get systematic, noticeable shifts in your rendered images compared to what you would get by passing in a “true” (0,0,1) normal vector.
An 8-bit DuDv representation of your normal map still has the same issue representing zero.
You may want to try using the RGB16 format. Of course, from a theoretical point of view this does not solve your problem. However, for all practical/visible purposes it has fixed both the shifting and the blockiness problems for the scenes I have worked on. Yes, of course, you pay in terms of the size of your normal map if you go this route.
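To see why 16 bits fixes the problem for practical purposes even though it doesn’t solve it in theory, you can compare the smallest achievable “zero error” at each bit depth (a rough sketch, using the same 2*c - 1 expand as above):

```python
def nearest_zero_error(bits):
    """Smallest |decoded value| achievable for a 'zero' component
    stored in an unsigned integer channel of the given bit depth."""
    levels = 2 ** bits - 1
    best = min(range(levels + 1),
               key=lambda b: abs(2.0 * b / levels - 1.0))
    return abs(2.0 * best / levels - 1.0)

print(nearest_zero_error(8))   # -> 1/255   ~ 0.0039
print(nearest_zero_error(16))  # -> 1/65535 ~ 0.000015
```

The 16-bit error is about 256 times smaller, which is typically well below anything visible after filtering.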
If you have one of the new graphics cards like the 6800 and don’t have to worry about things looking good on older cards, you can always try floating-point normal maps. Filtering support for these kinds of textures is limited (e.g. GL_NEAREST only) on most other cards (other than the 6800, that is), and blockiness is often visible.
Another issue that is coming up more and more is numerical error propagation in vertex and fragment shaders. In your RGB8 normal maps, for example, the representation of 0 carries an error of up to 0.5/255 up front. That error propagation must be watched carefully, at least in complex shaders.
I hope I made some sense.
Who says you have to use 0.5 * x + 0.5, though? Why not use 127/255 * x + 127/255? Zero will then be represented exactly as 127. It only makes use of the range 0 to 254, but I’d be surprised if that were an issue.
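A sketch of that suggestion (function names are illustrative): encode with a scale of 127 instead of 127.5, so that byte 127 decodes to exactly 0, at the cost of never using byte 255. In the fragment program the matching expand would be (c * 255 - 127) / 127 instead of 2*c - 1.

```python
def encode(x):
    """Map x in [-1, 1] to a byte, with 0 landing exactly on 127."""
    return round(127.0 * x + 127.0)

def decode(byte):
    """Inverse expand back to [-1, 1]."""
    return (byte - 127.0) / 127.0

print(decode(encode(0.0)))   # -> 0.0 exactly
print(decode(encode(1.0)))   # -> 1.0
print(decode(encode(-1.0)))  # -> -1.0
```

The endpoints -1 and +1 are still exact (bytes 0 and 254), so the only change is a very slightly coarser step size across the range.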
As shaders get more and more sophisticated this will be less of an issue, because they’ll use different representations and/or higher precision. This is already being solved.