i have some questions about deferred rendering:
- what is “albedo” in terms of the material values delivered by “.obj” model files (via their “.mtl” material libraries) ?
i’ve heard / read about it in several videos / tutorials:
is it a mixture of Ka, Kd and Ks ? … something like (Ka + Kd + Ks) / 3 ?
or is it just Ks ?
my lighting equation:
vec3 Intensity = IntensityAmbient * Ka + (Total)IntensityDiffuse * Kd + (Total)IntensitySpecular * Ks
… where the “shininess” float Ns is already included in (Total)IntensitySpecular
should i discard my current lighting equation and just use Kd for all light intensities ?
… vec3 Intensity = Kd * (IntensityAmbient + (Total)IntensityDiffuse + (Total)IntensitySpecular)
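to make the comparison concrete, here is a minimal GLSL sketch of my current (first) form, accumulated per light — the names `N`, `V`, `fragPos`, `lights[]` and `lightCount` are just placeholders for whatever the shader actually has available, not real code from my project:

```glsl
// Sketch: Ka/Kd/Ks form, Blinn-Phong style, summed over all lights.
vec3 result = IntensityAmbient * Ka;
for (int i = 0; i < lightCount; ++i) {
    vec3 L = normalize(lights[i].position - fragPos);
    vec3 H = normalize(L + V);                       // half vector
    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(N, H), 0.0), Ns);       // Ns = shininess
    result += lights[i].color * (diff * Kd + spec * Ks);
}
```

the tutorial’s (second) form would simply replace `Ka` and `Ks` with `Kd` in the lines above.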
as this tutorial shows:
- assuming that my current lighting equation (using Ka, Kd and Ks) is used, i would first render the scene into the G(eometry)Buffer with the following attributes:
– Kd (+ texture)
– Ks (+ Ns, both with textures if available)
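as a sketch of what i mean by that geometry pass (attachment layout, names and the Ns packing are assumptions, not settled decisions — presumably normals and position/depth would also have to be written for the lighting math):

```glsl
#version 330 core
// Geometry pass fragment shader sketch: write material data into the GBuffer (MRT).
layout (location = 0) out vec3 gNormal;     // world-space normal
layout (location = 1) out vec4 gAlbedoKd;   // rgb = Kd (modulated by texture)
layout (location = 2) out vec4 gSpecular;   // rgb = Ks, a = Ns packed to [0,1]

in vec3 Normal;
in vec2 TexCoords;

uniform sampler2D diffuseMap;
uniform vec3 Kd;
uniform vec3 Ks;
uniform float Ns;

void main() {
    gNormal   = normalize(Normal);
    gAlbedoKd = vec4(Kd * texture(diffuseMap, TexCoords).rgb, 1.0);
    gSpecular = vec4(Ks, Ns / 255.0); // assumes Ns <= 255
}
```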
in the second pass, i would render a screen-wide rectangle, and in the fragment shader:
– discard if no model was rendered at the current pixel
– otherwise do the lighting calculation for each light source (directional / point / spot lights)
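roughly like this for the second pass (again just a sketch — i assume a `gPosition` attachment for simplicity instead of reconstructing position from depth, and all uniform/struct names are made up):

```glsl
#version 330 core
// Lighting pass fragment shader sketch: full-screen quad, read GBuffer, shade.
out vec4 FragColor;
in vec2 TexCoords;

uniform sampler2D gDepth;
uniform sampler2D gPosition;  // world-space position (assumed attachment)
uniform sampler2D gNormal;
uniform sampler2D gAlbedoKd;  // rgb = Kd
uniform sampler2D gSpecular;  // rgb = Ks, a = Ns packed to [0,1]

struct Light { vec3 position; vec3 color; };
const int NUM_LIGHTS = 32;
uniform Light lights[NUM_LIGHTS];
uniform vec3 viewPos;
uniform vec3 IntensityAmbient;

void main() {
    // discard pixels where no geometry was written (depth still at clear value)
    if (texture(gDepth, TexCoords).r == 1.0) discard;

    vec3 fragPos = texture(gPosition, TexCoords).rgb;
    vec3 N       = texture(gNormal, TexCoords).rgb;
    vec3 Kd      = texture(gAlbedoKd, TexCoords).rgb;
    vec4 specTex = texture(gSpecular, TexCoords);
    vec3 Ks      = specTex.rgb;
    float Ns     = specTex.a * 255.0;
    vec3 V       = normalize(viewPos - fragPos);

    vec3 result = IntensityAmbient * Kd; // or * Ka, if Ka is stored separately
    for (int i = 0; i < NUM_LIGHTS; ++i) {
        vec3 L = normalize(lights[i].position - fragPos);
        vec3 H = normalize(L + V);
        result += lights[i].color * (max(dot(N, L), 0.0) * Kd
                + pow(max(dot(N, H), 0.0), Ns) * Ks);
    }
    FragColor = vec4(result, 1.0);
}
```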
should i rather replace that full-screen calculation and instead render a “sphere of influence” for each point light and a “cone of influence” for each spot light, and additively blend the results ?
(both sphere and cone as “models” with variable sizes)
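for sizing those volumes, i imagine something like solving the attenuation function for the distance where the light’s contribution falls below a visible threshold (~1/256 of its maximum). a sketch, assuming the usual constant/linear/quadratic attenuation model — the function name and the 5/256 cutoff are my own choices, not from any fixed API:

```glsl
// Sketch: radius at which constant + linear*d + quadratic*d*d attenuates the
// light's brightest channel below ~5/256, via the quadratic formula.
float lightVolumeRadius(float constant, float linear, float quadratic, vec3 lightColor)
{
    float maxBrightness = max(max(lightColor.r, lightColor.g), lightColor.b);
    float threshold = 256.0 / 5.0;
    return (-linear + sqrt(linear * linear
            - 4.0 * quadratic * (constant - threshold * maxBrightness)))
           / (2.0 * quadratic);
}
```

the sphere (or cone length) would then be scaled by that radius before drawing it in the lighting pass.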
- assuming i want to render first everything into the GBuffer, is it a good idea to do the second lighting step in a separate lighting framebuffer ?
i’m asking because i think i’ve read that frequently changing the draw-buffer state isn’t very efficient, so i thought i’d create 2 separate framebuffer objects with constant draw buffers (GBuffer + lighting buffer)
thanks in advance!!