The code uses MRT (multiple render targets) to bake camera-space data into 4 textures.
In the light application stage you unproject the pixel into world space, compute lighting based on position, normal & material coefficients (sampling them from the textures you baked), and add the result to the pixel color. This, of course, is done in a fragment shader.
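To make that concrete, here's a minimal GLSL sketch of the light application pass. It assumes the G-buffer stores world-space position and a [0,1]-packed normal directly (texture and uniform names are illustrative, not from the code being discussed):

```glsl
// Light application pass: draw one full-screen quad per light with
// additive blending so each light's contribution accumulates (sketch).
uniform sampler2D u_positionTex; // world-space position (or reconstruct from depth)
uniform sampler2D u_normalTex;   // world-space normal, packed into [0,1]
uniform sampler2D u_albedoTex;   // diffuse albedo / material coefficients
uniform vec3 u_lightPos;
uniform vec3 u_lightColor;

varying vec2 v_texCoord; // from the full-screen quad

void main()
{
    vec3 P = texture2D(u_positionTex, v_texCoord).xyz;
    vec3 N = normalize(texture2D(u_normalTex, v_texCoord).xyz * 2.0 - 1.0);
    vec3 albedo = texture2D(u_albedoTex, v_texCoord).rgb;

    vec3 L = normalize(u_lightPos - P);
    float diffuse = max(dot(N, L), 0.0);

    gl_FragColor = vec4(albedo * u_lightColor * diffuse, 1.0);
}
```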
I don’t get it, isato. Why are you asking about Deferred Shading in GLSL here if you don’t know the basics, like sampling from a texture? At least draw a full-screen textured quad first.
No, I believe Deferred Lighting is also referred to as Light Pre-pass. Google it, possibly in combination with Wolfgang Engel. There was some good material on it at SIGGRAPH 2009, IIRC, and also on his blog.
Deferred Shading: store material attributes in the framebuffer, then come back with light shaders and light them.
Deferred Lighting: store light attributes in the framebuffer, then come back with material shaders and light them.
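To make the contrast concrete, in deferred lighting (light pre-pass) the lighting stage writes only light, not shaded color. A sketch of that stage, assuming the same illustrative G-buffer names as above:

```glsl
// Deferred Lighting / Light Pre-pass: accumulate lighting into a light
// buffer using only position + normal -- no material data yet (sketch).
uniform sampler2D u_positionTex; // world-space position
uniform sampler2D u_normalTex;   // world-space normal, packed into [0,1]
uniform vec3 u_lightPos;
uniform vec3 u_lightColor;

varying vec2 v_texCoord;

void main()
{
    vec3 P = texture2D(u_positionTex, v_texCoord).xyz;
    vec3 N = normalize(texture2D(u_normalTex, v_texCoord).xyz * 2.0 - 1.0);
    float NdotL = max(dot(N, normalize(u_lightPos - P)), 0.0);

    // A later geometry pass re-renders the scene with its material
    // shaders and multiplies the material color by this light buffer.
    gl_FragColor = vec4(u_lightColor * NdotL, 1.0);
}
```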
I am trying to build my G-buffer here. This is an example I found, but I don’t understand how the color is stored in gl_FragData[2]. Does this example not work without a texture on each piece of geometry?
I was wondering what happens if my geometry is colored using glColor.
Say I have a red quad without any texture, just colored with glColor. Will that not go into the G-buffer?
IIRC, you usually store your material albedo(s) per texel in the G-buffer. Whether it comes from the vertex color, 1 or more textures, or both, makes no difference to the technique. It’s all in how you write the shader and populate the gl_FragData[i] outputs.
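For example, a G-buffer fill pass that takes its albedo from glColor instead of a texture could look like this. This is a sketch with an assumed attachment layout (0 = position, 1 = normal, 2 = albedo) and illustrative varying names, using the old-style built-ins (gl_Color, gl_FragData) from the era of this thread:

```glsl
// G-buffer fill pass with MRT. The albedo written to gl_FragData[2]
// can come from the vertex color just as well as from a texture.
varying vec3 v_worldPos;    // passed from the vertex shader (assumed)
varying vec3 v_worldNormal; // passed from the vertex shader (assumed)

void main()
{
    gl_FragData[0] = vec4(v_worldPos, 1.0);

    // Pack the normal from [-1,1] into [0,1] for storage.
    gl_FragData[1] = vec4(normalize(v_worldNormal) * 0.5 + 0.5, 0.0);

    // gl_Color carries the interpolated glColor value, so your
    // untextured red quad ends up in the albedo attachment here.
    gl_FragData[2] = vec4(gl_Color.rgb, 1.0);
}
```

If the material comes from a texture instead, you would just write `texture2D(u_diffuseTex, v_texCoord)` into `gl_FragData[2]`; the rest of the technique is unchanged.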