Deferred Shading and Custom Shaders

Hello,

I have some questions that might be simple to answer, but I have some concerns regarding classic “Deferred Shading” for lighting purposes while using custom shader implementations.

Regarding deferred shading, are these the main steps:

  1. Apply a shader that outputs the vertex position and render the full scene to a texture
  2. Apply a shader that outputs the vertex normal and render the full scene to a texture
  3. Apply a shader that outputs the base color and render the full scene to a texture
  4. Apply a shader that outputs the depth (z-buffer) and render the full scene to a texture
  5. …Apply any other shader that outputs [anything else] and render the full scene to a texture
  6. Apply a combining shader, do the math for each fragment, and render to the framebuffer

Questions:

  • Do the render targets have to be textures? If so, do I need one texture for each stage, i.e. 5 textures in this case (4 + 1 final)?
  • Should the last one be a quad filling the entire screen, with the combining shader bound?

Are these, from a simplified point of view, the essential steps for implementing deferred shading?

Another question: using this method, if I wanted to apply a custom shader to a specific element (let’s say a special effect such as vanishing and/or blinking, etc…), would I have to adapt step 2 to add the effect to the color buffer, or would I have to add another stage to the process?

Regards,
Filipe Sequeira

There’s no need to output the position; it can be reconstructed from gl_FragCoord.xy and the depth.
Also, you would normally write all G-buffer layers from one render pass; OpenGL 3.0 and later support at least 8 colour attachments plus depth.
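
If it helps, here is a rough sketch of that reconstruction (shown as GLSL embedded in a C string; the uniform names u_DepthTex, u_InvProjection and u_ScreenSize are made up for the example, and a perspective projection is assumed):

    /* Hypothetical GLSL snippet (GLSL 1.30 or later) reconstructing the
       view-space position from the depth buffer instead of storing it in
       the G-buffer. All uniform names are placeholders. */
    static const char *reconstruct_position_glsl =
        "uniform sampler2D u_DepthTex;      /* depth attachment of the G-buffer */\n"
        "uniform mat4      u_InvProjection; /* inverse of the projection matrix */\n"
        "uniform vec2      u_ScreenSize;    /* viewport size in pixels */\n"
        "\n"
        "vec3 reconstructViewPos(void)\n"
        "{\n"
        "    vec2  uv    = gl_FragCoord.xy / u_ScreenSize;   /* [0,1] */\n"
        "    float depth = texture(u_DepthTex, uv).r;        /* [0,1] */\n"
        "    vec4  ndc   = vec4(uv, depth, 1.0) * 2.0 - 1.0; /* [-1,1] NDC */\n"
        "    vec4  view  = u_InvProjection * ndc;\n"
        "    return view.xyz / view.w;                       /* perspective divide */\n"
        "}\n";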

It isn’t necessary to use deferred rendering exclusively. You can render additional primitives once you’ve processed the G-buffer to a single colour buffer. E.g. translucent primitives can be rendered afterwards.

Processing is typically done per-light, i.e. you render the area of influence for each light. The results are combined using additive blending.
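
In outline, the lighting passes look something like the following sketch, assuming each light carries a radius and a bounding-volume mesh; setLightUniforms() and drawLightVolume() are made-up helpers rather than GL functions:

    /* Sketch of the per-light pass, assuming the G-buffer textures are
       already bound as samplers and lightShader reads them. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* write into the final colour buffer */
    glUseProgram(lightShader);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);            /* additive: each light's result is summed */
    glDepthMask(GL_FALSE);                  /* lighting passes must not write depth */

    for (int i = 0; i < numLights; ++i) {
        setLightUniforms(lightShader, &lights[i]);  /* position, colour, radius, ... */
        drawLightVolume(&lights[i]);                /* covers only the affected pixels */
    }

    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);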

The main point of deferred rendering is so that you aren’t calculating the lighting for every light for every fragment when each light only affects a small proportion of the scene. If you only have a small number of lights, there is less benefit to using deferred shading. There may still be some benefit from not needing to calculate lighting for fragments which are overdrawn by subsequent primitives, although the early depth test optimisation can achieve similar results, possibly better (fragments discarded by early depth tests skip the diffuse map lookups).

Thanks, so basically, I need to know each light’s area of influence.

But what if I have a set of lights with (theoretically) infinite falloff? Should I limit it anyway with a threshold?

And about the whole process: you said that GL supports 8 colour attachments plus depth. How can I store these values for later usage? In a single texture? In a buffer?

What “values” do you mean? He’s saying that instead of doing 4 passes over your scene where you render to 1 texture each, you do 1 pass that renders to 4 textures at the same time. Or rather, 3 textures, because as stated, you can reconstitute the position from other information.

That is, your shader outputs normal, “base color”, depth, etc all at once.

That’s exactly my question: how can I render/pass/generate 4 textures at once from the same shader?

Bind textures to different attachments (GL_COLOR_ATTACHMENTn or GL_COLOR_ATTACHMENT0+n), associate attachments to draw buffers with glDrawBuffers, and declare multiple output variables for the fragment shader (or, in the compatibility profile, use gl_FragData[] instead of gl_FragColor).
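
For example, a minimal version of that setup might look like this, assuming a 3.x core-profile context (with a loader such as GLEW or glad) and that gNormal, gAlbedo and gDepth are placeholder names for textures already allocated at screen resolution:

    /* Create an FBO and attach one texture per G-buffer layer, plus depth. */
    GLuint gBufferFBO;
    glGenFramebuffers(1, &gBufferFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, gBufferFBO);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gNormal, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gAlbedo, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, gDepth,  0);

    /* Map fragment-shader outputs 0 and 1 to the two colour attachments. */
    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);

    /* The matching fragment shader (GLSL 330 core) declares one output per
       attachment; the layout locations follow the glDrawBuffers order above. */
    static const char *gbuffer_fs =
        "#version 330 core\n"
        "in vec3 vNormal;\n"
        "in vec2 vTexCoord;\n"
        "uniform sampler2D u_BaseColor;\n"
        "layout(location = 0) out vec4 outNormal;  // -> GL_COLOR_ATTACHMENT0\n"
        "layout(location = 1) out vec4 outAlbedo;  // -> GL_COLOR_ATTACHMENT1\n"
        "void main() {\n"
        "    outNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);\n"
        "    outAlbedo = texture(u_BaseColor, vTexCoord);\n"
        "}\n";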

It doesn’t matter if they have “theoretically” infinite falloff; if there comes a point where the effect of the light is “drowned out” by closer lights, you can ignore it. If they have practically infinite falloff (i.e. their effect is noticeable across the entire scene), then render a full-screen quad. But if you have many such lights, calculating their effect on each fragment is likely to be expensive.
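
One way to pick the threshold, assuming the usual constant/linear/quadratic attenuation model (an assumption here, not the only choice), is to solve for the distance at which the light's contribution drops below a small cutoff and treat that as its radius:

    #include <math.h>

    /* Hypothetical helper: with att(d) = 1 / (kc + kl*d + kq*d*d), find the
       distance where the light falls below `cutoff` (e.g. 1/256 of full
       brightness) so everything beyond it can be ignored. Assumes kq > 0. */
    float light_radius(float kc, float kl, float kq, float maxIntensity, float cutoff)
    {
        /* Solve kq*d^2 + kl*d + (kc - maxIntensity/cutoff) = 0 for the positive root. */
        float c = kc - maxIntensity / cutoff;
        return (-kl + sqrtf(kl * kl - 4.0f * kq * c)) / (2.0f * kq);
    }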

If they’re static, consider pre-computing light maps. For static objects, this is just a texture which behaves like an emissive map. For dynamic objects which are far enough from the lights that the relative position can be treated as constant (i.e. the lights are directional rather than positional), you can generate a cube map, and the diffuse lighting is just a lookup into that map with the world-space surface normal. For dynamic lights or for dynamic objects which are close to the lights, you have to do the calculation for each light for each fragment each frame.
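
The lookup itself is then a single line of shader code; roughly like this, where u_Irradiance is an assumed name for the pre-computed cube map (again GLSL wrapped in a C string):

    /* Hypothetical GLSL for the "distant static lights" case: diffuse
       lighting becomes one cube-map lookup with the world-space normal. */
    static const char *lightprobe_glsl =
        "uniform samplerCube u_Irradiance; /* pre-computed diffuse light per direction */\n"
        "\n"
        "vec3 diffuseFromProbe(vec3 worldNormal, vec3 albedo)\n"
        "{\n"
        "    vec3 incoming = texture(u_Irradiance, normalize(worldNormal)).rgb;\n"
        "    return albedo * incoming;\n"
        "}\n";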

Thanks a lot for taking the time to help me out, as I’ve only recently joined this community.

Best regards