(kind of) Deferred Shading


I was trying to come up with a way to do deferred shading on GeForce 4 MX class hardware. I've got a rough idea of how to do it, though I don't know if it's possible, feasible, or even totally wrong, so I'd like opinions on this one…

Basically, you'd render the scene 3 times to 3 different textures: one with the regular texture map (no lighting), one with the normal map, and one with a "position map" which encodes the position of each pixel (I think this could be done with a vertex shader writing into a floating-point texture - a kind of fat buffer).
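To make the position map idea concrete, here's a rough CPU-side sketch of packing a world-space position into an 8-bit RGB texel instead of a floating-point one (the bounding box and function names are made up for illustration); it also shows why the limited precision is a concern:

```python
def encode_position(pos, bbox_min, bbox_max):
    """Map a world position into [0, 255] per channel (8 bits each)."""
    return tuple(
        int(round(255.0 * (p - lo) / (hi - lo)))
        for p, lo, hi in zip(pos, bbox_min, bbox_max)
    )

def decode_position(texel, bbox_min, bbox_max):
    """Recover an approximate world position from a stored texel."""
    return tuple(
        lo + (t / 255.0) * (hi - lo)
        for t, lo, hi in zip(texel, bbox_min, bbox_max)
    )

# Example: a 20-unit scene packed into 8 bits per axis.
bbox_min, bbox_max = (-10.0, -10.0, -10.0), (10.0, 10.0, 10.0)
texel = encode_position((1.0, 2.5, -3.0), bbox_min, bbox_max)
approx = decode_position(texel, bbox_min, bbox_max)
# With 8 bits over a 20-unit range the quantization error can reach
# roughly 0.04 units per axis -- hence the floating-point texture idea.
```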

Then, for each light, you have to do two separate texture combines: you subtract the position map from a full-screen quad encoding the light position, so you get a light-to-pixel vector map. Then you do a dot3 between this map and the normal map (probably with a normalization cube map entering here somewhere). The result of this operation is sent to the accumulation buffer (or to the frame buffer using an additive blending function), and finally, after all lights are computed, the texture map is multiplied by the accumulated light map and sent to the framebuffer.
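The per-pixel math those combines would compute can be sketched on the CPU like this (purely illustrative; the names are made up, and the normalize step stands in for the normalization cube map):

```python
import math

def normalize(v):
    """Stand-in for the normalization cube map lookup."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def shade_pixel(albedo, normal, position, light_positions):
    """One pixel's worth of the described multi-pass lighting."""
    accum = 0.0
    for light_pos in light_positions:
        # Subtract position map from the light position quad.
        to_light = tuple(l - p for l, p in zip(light_pos, position))
        l_dir = normalize(to_light)
        # dot3 with the normal map; clamp like the hardware would.
        n_dot_l = sum(a * b for a, b in zip(normal, l_dir))
        accum += max(n_dot_l, 0.0)        # additive blend per light
    accum = min(accum, 1.0)               # frame buffer saturates
    # Final pass: modulate the texture map by the accumulated light.
    return tuple(a * accum for a in albedo)
```

For a pixel facing a light head-on (normal (0, 0, 1), light straight above it), the accumulated term is 1.0 and the albedo passes through unchanged.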

So, is this a good idea, a bad idea or a really stupid one?


My understanding of deferred shading is that it reduces the cost of per-fragment operations at the expense of bandwidth. If that's true, then it doesn't seem like it would work too well on your MX, which has no fragment programs or spare bandwidth. Also, I don't think MX render targets have enough precision to encode position, although you implicitly gain it through the depth buffer.

I guess the other point is to separate surface properties from lighting properties, which your technique appears to do. You don't necessarily need to draw full-screen quads, since you can bound each light's influence in screen space (to save some fill), and you also don't need the accumulation buffer; you can just do additive blending.
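Bounding a light in screen space can be sketched roughly like this (the projection model, light radius, and names are all assumptions for illustration, not a definitive implementation):

```python
def light_screen_rect(light_pos, radius, focal_len, screen_w, screen_h):
    """Project a point light's sphere of influence to a screen rect.

    light_pos is in camera space (camera looks down -z). Returns None
    when the sphere reaches behind the camera; a conservative fallback
    there would be the full-screen quad.
    """
    x, y, z = light_pos
    if z > -radius:
        return None
    # Project the center, then use a conservative half-extent so the
    # quad covers the whole sphere of influence.
    sx = screen_w * 0.5 + focal_len * (x / -z)
    sy = screen_h * 0.5 + focal_len * (y / -z)
    half = focal_len * (radius / -z)
    return (max(sx - half, 0.0), max(sy - half, 0.0),
            min(sx + half, float(screen_w)), min(sy + half, float(screen_h)))
```

You'd then draw the additive lighting pass only inside that rectangle (e.g. with a scissor test) instead of touching every pixel on screen.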

I didn’t really go over your algorithm too closely (about to head off to a meeting), but maybe this will give you something to think about.