I am implementing a radiosity processor using hemicube renderings on the GPU, and I was thinking about rendering multiple camera views into the same image buffer (each view assigned to a different quadrant, of course).
For example, suppose I have a few texels to process on the same surface. For each of them I'd have to render its hemicube. Normally I'd do this:
[ol][li]set the camera view for the first texel, corresponding to one of the hemicube faces[/li][li]set the destination buffer where the rendering will be written[/li][li]render the geometry (with proper culling based on this camera's position)[/li][li]repeat the three steps above for the other 4 faces of the hemicube[/li][li]repeat the whole process for the other texels[/li][/ol]
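For reference, here is the per-texel loop above sketched in Python, just to count the work involved (all function names are hypothetical placeholders, not real GL calls):

```python
# Naive hemicube pass: one render per texel per face.
# set_face_camera / set_target / render_geometry stand in for
# the real camera-setup, FBO-binding and draw calls.

HEMICUBE_FACES = 5  # 1 top face + 4 half side faces

def render_hemicubes(texels, set_face_camera, set_target, render_geometry):
    passes = 0
    for texel in texels:
        for face in range(HEMICUBE_FACES):
            set_face_camera(texel, face)   # step 1: camera for this face
            set_target(texel, face)        # step 2: destination buffer
            render_geometry(texel, face)   # step 3: draw with per-camera culling
            passes += 1
    return passes

# 4 texels x 5 faces -> 20 separate render passes
```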
The idea instead is to render the same face of all these nearby texels in one batch:
[ol][li]pass the shader a single projection matrix shared by all the cameras (it's guaranteed to be identical for all of them), then pass an array of modelview matrices, one per camera[/li][li]set a destination buffer whose size is a multiple of the number of cameras you need to use[/li][li]render the geometry only once. The culling will have to be "broader", so the visible set includes the geometry inside every camera frustum considered. Since all the cameras render the same hemicube face and belong to nearby texels, the visible geometry is almost the same for each view[/li][li]in the vertex shader, multiply the vertex data by each camera's matrix and pass each result separately to the fragment shader (as another array)[/li][li]in the fragment shader, each result is used to render into the quadrant of the buffer assigned to that camera. For example, with 4 cameras we split the buffer in half both horizontally and vertically and render the 4 final images into those 4 rectangles[/li][li]repeat the same process for the other 4 faces of the hemicube[/li][/ol]
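The bookkeeping for the batched layout could be sketched like this (pure CPU-side arithmetic, assuming the cameras are packed into a square grid of quadrants; the function names are mine, not from any API):

```python
import math

def camera_viewports(num_cameras, buf_w, buf_h):
    """Split one destination buffer into a grid of per-camera viewports.

    For 4 cameras the buffer is halved both horizontally and vertically,
    giving four (x, y, w, h) rectangles, one per camera.
    """
    grid = math.ceil(math.sqrt(num_cameras))  # e.g. 4 cameras -> 2x2 grid
    w, h = buf_w // grid, buf_h // grid
    return [((i % grid) * w, (i // grid) * h, w, h)
            for i in range(num_cameras)]

def batched_pass_count(num_texels, faces=5):
    # One draw per hemicube face shared by all batched cameras,
    # instead of one draw per face per texel.
    return faces

# 4 cameras in a 1024x1024 buffer -> four 512x512 quadrants,
# and 5 render passes instead of 4 * 5 = 20.
```

The per-vertex routing to the right quadrant is the part the shader has to handle; this sketch only shows where each camera's image would land in the shared buffer.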
Are there advantages to using this approach, or is it not feasible for some reason?