Deferred Rendering

Hi everyone,

I have some questions about deferred rendering:

  1. What is “albedo” in terms of the material values delivered by “.obj” model files?
    I’ve heard/read about it in several videos/tutorials, e.g. “Learn OpenGL” (learnopengl.com), an extensive tutorial resource for learning modern OpenGL.

Is it a mixture of Ka, Kd and Ks? … something like (Ka + Kd + Ks) / 3?
Or is it just Ks?

My lighting equation:
vec3 Intensity = IntensityAmbient * Ka + TotalIntensityDiffuse * Kd + TotalIntensitySpecular * Ks
… where “Total” means summed over all light sources, and the “shininess” float Ns is already included in TotalIntensitySpecular.
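
Written out as GLSL, that combine step would look roughly like this (a minimal sketch; the function and parameter names are mine, just for illustration):

```glsl
// Sketch of the combine step above (illustrative names).
// totalDiffuse / totalSpecular are assumed to be pre-summed over all
// light sources, with the Ns shininess already folded into totalSpecular.
vec3 shade(vec3 Ka, vec3 Kd, vec3 Ks,
           vec3 ambient, vec3 totalDiffuse, vec3 totalSpecular)
{
    return ambient * Ka + totalDiffuse * Kd + totalSpecular * Ks;
}
```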

Should I discard my current lighting equation and just use Kd for all light intensities?
… vec3 Intensity = Kd * (IntensityAmbient + TotalIntensityDiffuse + TotalIntensitySpecular)
as this tutorial shows:
http://ogldev.org/www/tutorial35/tutorial35.html
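
For comparison, that Kd-only variant would collapse to something like this (same illustrative names as above):

```glsl
// Kd-only variant: one material colour scales all three light parts.
vec3 shadeKdOnly(vec3 Kd,
                 vec3 ambient, vec3 totalDiffuse, vec3 totalSpecular)
{
    return Kd * (ambient + totalDiffuse + totalSpecular);
}
```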

  2. Assuming my current lighting equation (using Ka, Kd and Ks) is kept, I would first render the scene into the G(eometry)-buffer with the following attributes (a sketch of the matching MRT write shader follows this list):
    – position
    – normal
    – Ka
    – Kd (+ texture)
    – Ks (+ Ns, both with textures if available)
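
The geometry-pass fragment shader could then write those attributes out via MRT like this (a minimal sketch; the output locations, names and the Ks/Ns packing are my assumptions):

```glsl
#version 330 core
// Geometry pass: write material attributes into the G-buffer (sketch).
in vec3 vWorldPos;
in vec3 vNormal;
in vec2 vTexCoord;

uniform vec3  uKa;
uniform vec3  uKd;
uniform vec3  uKs;
uniform float uNs;
uniform sampler2D uDiffuseTex;   // hypothetical Kd texture
uniform sampler2D uSpecularTex;  // hypothetical Ks texture

layout (location = 0) out vec3 gPosition;
layout (location = 1) out vec3 gNormal;
layout (location = 2) out vec3 gKa;
layout (location = 3) out vec3 gKd;
layout (location = 4) out vec4 gKsNs;   // Ks in .rgb, Ns in .a (packing choice)

void main()
{
    gPosition = vWorldPos;
    gNormal   = normalize(vNormal);
    gKa       = uKa;
    gKd       = uKd * texture(uDiffuseTex, vTexCoord).rgb;
    gKsNs     = vec4(uKs * texture(uSpecularTex, vTexCoord).rgb, uNs);
}
```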

In the second pass, I would render a screen-wide rectangle and, in the fragment shader:
– discard if no model was rendered at the current pixel
– otherwise do the lighting calculation for each light source (directional / point / spot lights); see the sketch below
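
Such a second pass could look roughly like this, shown with a single hypothetical directional light to keep the sketch short (texture names match the G-buffer sketch above; the cleared-normal test for “no model here” is just one possible convention):

```glsl
#version 330 core
// Lighting pass over a fullscreen quad (sketch, one directional light).
in vec2 vTexCoord;

uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gKa;
uniform sampler2D gKd;
uniform sampler2D gKsNs;     // Ks in .rgb, Ns in .a
uniform vec3 uViewPos;
uniform vec3 uLightDir;      // hypothetical directional light
uniform vec3 uLightColor;

out vec4 fragColor;

void main()
{
    vec3 N = texture(gNormal, vTexCoord).rgb;
    if (dot(N, N) < 0.0001)
        discard;             // normal target was cleared: no model here

    vec3 P    = texture(gPosition, vTexCoord).rgb;
    vec3 Ka   = texture(gKa, vTexCoord).rgb;
    vec3 Kd   = texture(gKd, vTexCoord).rgb;
    vec4 KsNs = texture(gKsNs, vTexCoord);

    vec3 L = normalize(-uLightDir);
    vec3 V = normalize(uViewPos - P);
    vec3 H = normalize(L + V);                // Blinn-Phong half vector

    vec3 ambient  = 0.1 * uLightColor * Ka;   // arbitrary ambient factor
    vec3 diffuse  = max(dot(N, L), 0.0) * uLightColor * Kd;
    vec3 specular = pow(max(dot(N, H), 0.0), KsNs.a) * uLightColor * KsNs.rgb;

    fragColor = vec4(ambient + diffuse + specular, 1.0);
}
```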

Or should I instead render “spheres of influence” for each point light and “cones of influence” for each spot light, and blend the results?
(both sphere and cone as actual meshes with per-light sizes)
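
A point-light “sphere of influence” pass could look roughly like this, assuming one draw call per light with additive blending enabled (glBlendFunc(GL_ONE, GL_ONE)); all names and the linear falloff are illustrative:

```glsl
#version 330 core
// One point-light sphere, drawn as a mesh scaled to the light's radius;
// results accumulate in the target via additive blending (sketch).
uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gKd;
uniform vec2  uScreenSize;   // to turn gl_FragCoord into G-buffer coords
uniform vec3  uLightPos;
uniform vec3  uLightColor;
uniform float uLightRadius;

out vec4 fragColor;

void main()
{
    // the sphere mesh is rasterized in screen space, so sample the
    // G-buffer at the fragment's screen position, not via mesh UVs
    vec2 uv = gl_FragCoord.xy / uScreenSize;

    vec3 P = texture(gPosition, uv).rgb;
    vec3 N = texture(gNormal, uv).rgb;

    vec3  toLight = uLightPos - P;
    float dist    = length(toLight);
    if (dist > uLightRadius)
        discard;                                 // outside the light's reach

    float atten   = 1.0 - dist / uLightRadius;   // simple linear falloff
    vec3  Kd      = texture(gKd, uv).rgb;
    vec3  diffuse = max(dot(normalize(N), toLight / dist), 0.0)
                  * uLightColor * Kd * atten;

    fragColor = vec4(diffuse, 0.0);
}
```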

  3. Assuming I render everything into the G-buffer first, is it a good idea to do the second (lighting) step in a separate lighting framebuffer?
    I’m asking because I’ve read that frequently changing the draw-buffer state isn’t very efficient, so I thought I’d create 2 separate framebuffer objects with constant draw buffers (G-buffer + lighting buffer).

thanks in advance!!

  1. Albedo is generally the diffuse part, but in that tutorial they also use “albedo” for the specular part. I personally never use this term, which I find troublesome…

  2. You can, as an optimization.

  3. I’m not sure if I understood you, but deferred rendering is generally done using Multiple Render Targets (MRT).

I’m using MRT. Currently I have 2 framebuffer objects (besides the default framebuffer):
the first is the G-buffer, containing 6 textures (position / normal / Ka / Kd / Ks / Ns);
the second framebuffer contains 3 textures, since I thought I’d render the different light parts (ambient / diffuse / specular) into separate textures;
lastly, I mix the light parts into the default framebuffer (see the sketch below).
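
The final mix step I have in mind would be something like this (a minimal sketch; the sampler names are mine):

```glsl
#version 330 core
// Final pass into the default framebuffer: sum the three light parts.
in vec2 vTexCoord;

uniform sampler2D uAmbientTex;
uniform sampler2D uDiffuseTex;
uniform sampler2D uSpecularTex;

out vec4 fragColor;

void main()
{
    vec3 color = texture(uAmbientTex,  vTexCoord).rgb
               + texture(uDiffuseTex,  vTexCoord).rgb
               + texture(uSpecularTex, vTexCoord).rgb;
    fragColor = vec4(color, 1.0);
}
```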

So you have your FBO with all the render targets filled. Then what one generally does is simply render a fullscreen quad to the screen.

You can of course do the last step in another framebuffer, but then you’ll still have to render that result to the screen, so again use a fullscreen quad sampling the texture of your last FBO. The lighting itself is generally done with a single shader invocation, looping over all the lights (see the sketch below).
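
That loop typically lives in the fullscreen-quad shader as a uniform array, roughly like this (a sketch; the struct layout, MAX_LIGHTS and the falloff are illustrative assumptions):

```glsl
// Inside the lighting-pass fragment shader (sketch).
#define MAX_LIGHTS 32

struct PointLight {
    vec3  position;
    vec3  color;
    float radius;
};

uniform PointLight uLights[MAX_LIGHTS];
uniform int uNumLights;

vec3 accumulateDiffuse(vec3 P, vec3 N, vec3 Kd)
{
    vec3 result = vec3(0.0);
    for (int i = 0; i < uNumLights; ++i) {
        vec3  toLight = uLights[i].position - P;
        float dist    = length(toLight);
        if (dist > uLights[i].radius)
            continue;                          // out of range, skip
        float atten = 1.0 - dist / uLights[i].radius;
        result += max(dot(N, toLight / dist), 0.0)
                * uLights[i].color * Kd * atten;
    }
    return result;
}
```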

See this for example.

That’s a reasonable first step when implementing your lighting pass. But if you have a non-trivial number of light sources, you may run into fill-rate problems and/or limits on the amount of uniform data you can feed to your shader (depending on how you pass in your light sources). At that point you optimize this into something less brute-force than fullscreen quads.