ATI Fragment Shader Program Question

I’m just getting started with fragment programs. I’m trying to figure out if I can access a normal “map” generated by OpenGL from my glNormal3f() calls, or if I have to generate this map for myself.

In examples in the spec for ATI_text_fragment_shader I see stuff like

SampleMap    r4, t5.str;    # Sample normal map

but I’m unsure of exactly what this does. Also, how can you tell (or assign) what’s in t0, t1, t2, … ?


Keenan Crane

glNormal3f() defines a per-vertex normal. You should use them in vertex programs …

SampleMap r4, t5.str samples a filtered texel from a texture unit (I think texture unit 4) into register r4, using the texture coordinates of interpolator t5 (i.e., the coordinates set for texture unit 5).

You should generate a normal map texture and fetch per-fragment normals using SampleMap calls.

t0, t1, t2, … contain texture coordinates.
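For reference, ATI_text_fragment_shader programs are short text blocks. From memory of the spec's examples (double-check the exact modifier syntax against the spec itself), a pass that fetches a normal and dots it with a light direction stored in constant c0 looks roughly like:

```
!!ATIfs1.0
StartOutputPass;
    SampleMap  r0, t0.str;                  # r0 = normal-map texel from unit 0,
                                            #      coords from interpolator t0
    DOT3       r0, r0.2x.bias, c0.2x.bias;  # N.L, expanding [0,1] to [-1,1]
EndPass;
```

The .2x.bias modifiers expand the [0,1] color encoding back into a [-1,1] vector before the dot product.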

Hope that helps,

That does help, thanks.
My question now is - what’s the best way to generate a normal map? I know there are all kinds of tricks like using a preprocessor program, generating one in Photoshop, etc., but I’d like the most modern and automated way of doing it, preferably without importing a texture file (i.e., generating the map within the program).

Also, I’ve come across the commands

glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, myTextureID);

Will these assign specific textures to specific registers (in this case myTextureID to register 0), or do I have to do something else?


Keenan Crane

> glActiveTextureARB(GL_TEXTURE0_ARB);
> glBindTexture(GL_TEXTURE_2D, myTextureID);

binds a 2D texture with texture id myTextureID to texture unit 0. Sample from this texture in a fragment program using SampleMap r0, t0.str
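As a sketch of how this extends to multiple units (the texture ids and the function name here are hypothetical):

```c
#include <GL/gl.h>

/* Each SampleMap rN / tN pair refers to texture unit N, so bind one
   texture per unit.  normalMapID and baseMapID are hypothetical ids. */
void bind_shader_textures(GLuint normalMapID, GLuint baseMapID)
{
    glActiveTextureARB(GL_TEXTURE0_ARB);        /* unit 0 */
    glBindTexture(GL_TEXTURE_2D, normalMapID);  /* SampleMap r0, t0.str */
    glEnable(GL_TEXTURE_2D);

    glActiveTextureARB(GL_TEXTURE1_ARB);        /* unit 1 */
    glBindTexture(GL_TEXTURE_2D, baseMapID);    /* SampleMap r1, t1.str */
    glEnable(GL_TEXTURE_2D);
}
```

On platforms where glActiveTextureARB is an extension entry point you would fetch the function pointer first.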

You will find plenty of information on the latest techniques on:


Normal maps can be generated in many different ways, and the method you choose is likely to depend on what you are attempting to accomplish.

Off the top of my head, three primary categories of normal map generation exist:

  1. Height field derived gradients (presently most common)
  2. Maps derived from a high detail model (up and coming)
  3. Volume gradients (mainly visualization)

The simplest algorithmic example I can suggest is to generate a map that makes a flat plane look like a sphere. The procedure is as follows:

  1. Set up a viewport the size of the intended texture.

  2. Clear to black

  3. Draw a sphere with the normals mapped to colors as follows:

r = n.x * 0.5 + 0.5
g = n.y * 0.5 + 0.5
b = n.z * 0.5 + 0.5

(This can be done using the shader)

  4. CopyTexImage this into your normal map texture.

  5. Draw with it.

For a more complex example of this sort of thing, the ATI developer relations site has a tool that generates normal maps via ray tracing. You can find it here:


Thanks for the nice post Evan,

But generating normal maps from a volume gradient?
Does this mean creating a 3D normal map?
Who does this, and for what?

V-man - everybody does this :wink:

Seriously, it’s a common technique in volume visualization known as volume shading. Pre-computed gradients were the only way to apply an external light source to volume graphics before the GeForce FX and Radeon 9700. Now, with more texture fetches available, you can also compute the gradients on the fly.
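A minimal sketch of the pre-computed-gradient step (the negated, normalized gradient then serves as the per-voxel shading normal; helper name and storage layout are my own):

```c
/* Central-difference gradient of a scalar volume v, stored as
   dim*dim*dim floats in x-fastest order.  Hypothetical helper,
   not from the thread. */
void volume_gradient(const float *v, int dim, int x, int y, int z,
                     float g[3])
{
    #define V(i, j, k) v[((k) * dim + (j)) * dim + (i)]
    /* clamp neighbours at the volume borders */
    int xm = x > 0 ? x - 1 : x, xp = x < dim - 1 ? x + 1 : x;
    int ym = y > 0 ? y - 1 : y, yp = y < dim - 1 ? y + 1 : y;
    int zm = z > 0 ? z - 1 : z, zp = z < dim - 1 ? z + 1 : z;
    g[0] = (V(xp, y, z) - V(xm, y, z)) * 0.5f;
    g[1] = (V(x, yp, z) - V(x, ym, z)) * 0.5f;
    g[2] = (V(x, y, zp) - V(x, y, zm)) * 0.5f;
    #undef V
}
```

Packed into an RGB 3D texture with the same n*0.5 + 0.5 encoding as above, this gives you exactly the "3D normal map" V-man asked about.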

You can find some very impressive examples of shaded volume graphics on Joe Michael Kniss’s homepage: