Hello,
I was trying to come up with a way to do deferred shading on GeForce 4 MX class hardware. I've kind of worked out a way of doing it, though I don't know if it's possible, feasible, or even totally wrong, so I'd like opinions on this one…
Basically, you'd render the scene 3 times to 3 different textures: one with the regular texture map (no lighting), one with the normal map, and one with a "position map" which encodes the world-space position of each pixel (I think this could be done with a vertex shader writing to a floating-point texture - a kind of fat buffer).
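To make the normal-map target concrete, here's a tiny CPU-side sketch (plain Python, function names are mine, not any real API) of the usual DOT3-style encoding: each component of a unit normal is scaled and biased from [-1, 1] into the [0, 1] range a color texture can store, and decoded back before the dot product.

```python
def encode_normal(n):
    # Scale-and-bias each component from [-1, 1] into [0, 1],
    # the range an RGB texel can hold (standard DOT3 encoding).
    return tuple(0.5 * c + 0.5 for c in n)

def decode_normal(rgb):
    # Undo the scale-and-bias, recovering the signed normal.
    return tuple(2.0 * c - 1.0 for c in rgb)
```

The position map needs no such trick if a floating-point texture really is available, since it can store signed values directly.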
Then, for each light, you make two separate texture combines: first you subtract the position map from a full-screen quad encoding the light position, which gives you a light-to-pixel map. Then you do a dot3 of this map with the normal map (probably with a normalization cube map entering here somewhere). The result of this operation is sent to the accumulation buffer (or to the frame buffer using an additive blending function), and finally, after all lights are computed, the texture map is multiplied by the accumulated light map and sent to the framebuffer.
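The per-light math above, for a single pixel, boils down to something like this CPU-side sketch (plain Python; `shade_pixel` and its arguments are illustrative names, not a real API). The `normalize` call plays the role of the normalization cube map, and clamping the dot product at zero is what the combiner's unsigned output would give you:

```python
import math

def normalize(v):
    # Stand-in for the normalization cube map: rescale to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def shade_pixel(albedo, normal, position, lights):
    # Accumulate the diffuse contribution of each light (the light map),
    # then modulate the unlit texture color by the accumulated result.
    accumulated = 0.0
    for light_pos in lights:
        # Subtraction of the two maps gives the (unnormalized) direction
        # between pixel and light; the sign only flips the vector.
        to_light = normalize(tuple(l - p for l, p in zip(light_pos, position)))
        # dot3 of light direction with the decoded normal, clamped at 0.
        accumulated += max(0.0, sum(n * d for n, d in zip(normal, to_light)))
    # Final pass: texture map times accumulated light, saturated at 1.
    return tuple(min(1.0, a * accumulated) for a in albedo)
```

For example, a pixel at the origin with normal (0, 0, 1) lit by a single light straight above it gets the full albedo back, since the dot product is exactly 1.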
So, is this a good idea, a bad idea or a really stupid one?
Cya.