Global Illumination Question? :D

Hello girls and boys!

I've been looking at the real-time global illumination approach from NVIDIA's GPU Gems 2: http.developer.nvidia.com/GPUGems2/gpugems2_chapter38.html

This approach revolves around depth peeling and a number of parallel projections applied to the scene, to simulate light bouncing around it.

However, I am having trouble understanding how it would outperform traditional ray-tracing techniques, where we trace each ray through the scene. I elaborate on this below.

Assume that we have a camera inside a room, with several windows and a table in the center. The room is part of a building, so there are other objects behind the walls which are invisible to the camera. Even with heavy occlusion culling, assume that those objects will still end up having to be drawn (in vain).
First of all, just as usual, we render all of the geometry via a deferred rendering pipeline, obtaining the positions, normals and colors of all the visible fragments in the scene.
Now we need to light them with our Global Illumination!
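Just to be concrete about what I have at this point, here is a tiny Python/NumPy sketch of the G-buffer layout I'm imagining (the names and resolution are mine, not from the chapter):

```python
import numpy as np

# Hypothetical G-buffer for a W x H frame, as produced by the deferred pass:
# one world-space position, normal and diffuse color per visible fragment.
W, H = 1280, 720
gbuffer = {
    "position": np.zeros((H, W, 3), dtype=np.float32),  # world-space positions
    "normal":   np.zeros((H, W, 3), dtype=np.float32),  # world-space normals
    "albedo":   np.zeros((H, W, 3), dtype=np.float32),  # diffuse colors
}
```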

GPU Gems suggests the following:

  1. Pick several directions (for example 16 directions, in a sphere-like distribution).
  2. Use the first direction to parallel-project all of the scene’s geometry onto a texture, object by object, back-to-front.
  3. Every time we project one of those objects, we check whether its fragments are closer than what was already rendered to this texture in previous draws. This follows the notion of depth peeling.
  4. After every single object, we take all of the main camera’s visible fragments and transform them into the space of the projected object’s fragments (see the sketch after this list).
    If some of the new object’s fragments happen to be in front of our camera’s visible fragments, the object’s fragments contribute to the corresponding visible fragments’ colors. This can be weighted by the distance between them (in world space, for example), with less color contributed the further apart the fragments are.
    That way we are able to assemble a screen-space global illumination buffer (the way the camera would see it). It can then be added onto the diffuse colors of the fragments visible to the camera, to apply our Global Illumination.
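To make sure I understand step 4, here is a rough CPU-side sketch of what I think happens per projected layer (Python/NumPy; the function name, parameters and the distance falloff are all my own assumptions, not the chapter's exact math):

```python
import numpy as np

def gather_layer(vis_pos, vis_color_out, layer_depth, layer_pos, layer_flux,
                 proj_view, falloff=1.0):
    """One depth-peeled layer of one parallel projection: transform every
    camera-visible fragment into the layer's texture space, check whether the
    layer's fragment lies in front of it along the projection direction, and
    add a distance-weighted color contribution."""
    n = vis_pos.shape[0]
    homo = np.concatenate([vis_pos, np.ones((n, 1), dtype=vis_pos.dtype)], axis=1)
    clip = homo @ proj_view.T          # orthographic projection, so w stays 1
    uv = clip[:, :2] * 0.5 + 0.5       # NDC [-1,1] -> texture coords [0,1]

    h, w = layer_depth.shape
    x = np.clip((uv[:, 0] * w).astype(int), 0, w - 1)
    y = np.clip((uv[:, 1] * h).astype(int), 0, h - 1)

    # Is the layer's fragment in front of the visible fragment (closer to the
    # projection plane)?  If so, it can bounce light onto it.
    in_front = layer_depth[y, x] < clip[:, 2]

    # Distance-based falloff in world space, as described in step 4 above.
    d = np.linalg.norm(layer_pos[y, x] - vis_pos, axis=1)
    weight = in_front / (1.0 + falloff * d * d)

    vis_color_out += layer_flux[y, x] * weight[:, None]
```

On the GPU this would of course be a shader pass rather than a loop over arrays, but this is the data flow I'm assuming when I talk about the cost below.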

However, there are 3 problems I see with this:
a) As mentioned in step 4, every time an object is parallel-projected onto the texture, we need to compare all of the main camera’s visible fragments against that new object’s fragments. This is because we don’t know whether, say, a table in a room should affect a specific part of the visible ceiling or floor, etc. Hence, we have to project the ceiling’s and floor’s fragments into the space where the table’s fragments were projected. Moreover, the table might even be offscreen… Traditionally, in deferred rendering, a light volume identifies all the visible fragments that will be affected by a light; here that isn’t possible. Even if we did express the table’s fragments relative to the camera, the table might simply end up offscreen.
This leads to the necessity of parallel-projecting all the visible pixels just to see whether they are affected or not, as described in step 4.
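To put a (made-up) number on that: if the comparison really has to run once per depth-peeled layer of every projection direction, the per-frame work looks roughly like this (my own example figures, not the chapter's):

```python
# Rough per-frame cost of step 4 as I understand it.
visible_pixels = 1920 * 1080   # fragments in the camera's G-buffer
directions     = 16            # parallel-projection directions, as in the chapter's example
depth_layers   = 4             # depth-peeling layers per direction (a guess)

transforms = visible_pixels * directions * depth_layers
print(f"~{transforms / 1e6:.0f} million fragment transforms per frame")  # ~133 million
```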

b) The texture receiving the parallel projections would have to be huge if the main camera is looking at something the size of a stadium.
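A quick back-of-the-envelope calculation of what I mean (my numbers, not the chapter's):

```python
# How big the parallel-projection texture would have to be for a stadium-sized scene.
scene_extent_m = 200.0   # assume the projection has to cover ~200 m of scene
texel_size_m   = 0.05    # ~5 cm of scene per texel so the bounce lighting isn't too blurry
texture_size   = scene_extent_m / texel_size_m
print(f"needs roughly a {texture_size:.0f} x {texture_size:.0f} texture")  # 4000 x 4000
```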

c) Even if we accept the cost of step 4 and parallel-project all of the visible points every time we project the objects, we still only get one bounce of light (from each visible fragment to its closest fragment, determined with the depth-peeled parallel projections of objects). So why bother with such a time-consuming approach when there are already other, faster real-time approximations available? www.vis.uni-stuttgart.de/~dachsbcn/download/sii.pdf

Could you please explain whether I've missed something important in (a), (b) or (c)?

Thank you