I’ve recently been playing around with a lot of “advanced” techniques for per-pixel operations (using nVidia’s register combiners - but I believe that ATI’s newest hardware has similar extensions). As I began to look at some example code for per-pixel lighting, I noticed something ludicrous (please note that the following description is loose).
In the example (which was fairly representative), the vertex normals of a model are used as texture coordinates, which index into a cubic environment map whose only purpose is to normalize these vertex normals - I mean texture coordinates - and encode them as RGB triples.
Then, within the register combiners (the per-pixel shader), the dot product of the texture 1 color - or rather, the unit normal - and the light direction - actually the “auxiliary color” or something like that - is taken to generate a light intensity.
It seems to me that OpenGL has a lot of kinds of four-dimensional values - colors, vertices, normals, texture coordinates. Various facilities are provided for operating on each of these kinds of values, yet no one format allows for all the operations that the others do.
Having to use an entire texture unit just to generate interpolated unit normals for per-pixel shading seems wasteful. It seems that much could be gained just by unifying the operations on colors, texture coordinates, and geometry into a smaller but more comprehensive set of operations.
The only real limitation to this is the different ranges of these kinds of values ([0,1] vs. [-1,1] vs. R, for example). But one need only look at OpenGL shader or John Carmack’s .plan file to see that there would be great benefit to having floating-point color components and extended range throughout the GL pipeline.