AFAIK, right now you have to submit the scene six times to the hardware if you want to render reflection cubemaps (or for environment lighting, shadowing, or GPGPU purposes such as offline generation of lightmaps).
Wouldn’t it make sense if you could send the data once and have OpenGL rasterize 6 images simultaneously? Ideally, you would be able to set up an arbitrary number of cameras + framebuffers, so it would not just be for cubemaps (kind of like stereo views).
Just a thought.
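For reference, the six-pass status quo boils down to something like this. A minimal Python sketch; `render_to_face` is a hypothetical placeholder for binding one cubemap face as the render target and drawing the whole scene (the forward/up vectors follow the standard OpenGL cubemap face conventions):

```python
# The six standard cubemap face orientations (OpenGL convention):
# each face is rendered with a 90-degree FOV camera using these
# forward/up vectors.
CUBE_FACES = [
    ("+X", ( 1.0,  0.0,  0.0), (0.0, -1.0,  0.0)),
    ("-X", (-1.0,  0.0,  0.0), (0.0, -1.0,  0.0)),
    ("+Y", ( 0.0,  1.0,  0.0), (0.0,  0.0,  1.0)),
    ("-Y", ( 0.0, -1.0,  0.0), (0.0,  0.0, -1.0)),
    ("+Z", ( 0.0,  0.0,  1.0), (0.0, -1.0,  0.0)),
    ("-Z", ( 0.0,  0.0, -1.0), (0.0, -1.0,  0.0)),
]

def render_cubemap(render_to_face):
    # One full scene submission per face -- this per-face resubmission
    # is exactly the cost a single-pass mechanism would eliminate.
    for name, forward, up in CUBE_FACES:
        render_to_face(name, forward, up)
```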
There is of course the issue of handling culling.
I think that would be possible, although it would require some careful consideration.
API changes would be small - a cubemap as the render target, and instead of glFrustum you would use some other function (glCubeFrustum?) to specify just zNear and zFar. Instead of clipping polygons to the screen, they would be split between cube faces. There would probably be some issues (gl_FragCoord in shaders, glViewport, glScissor, etc.). The stencil and accumulation buffers would also require special formats for this (something like 6:1 or 3:2), and the depth buffer would of course be a depth cubemap.
I see one problem here - the perspective projection. This would probably require some additional, custom stage(s) in the vertex processor; I just can’t think of a single projection matrix that would perform 6 different perspective projections.
It would probably have to work like this: polygons get split into the parts visible on each cube face before the projection matrix is applied, and the projection is then computed in each face’s local coordinate system.
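The face-assignment half of that splitting is simple: a direction (relative to the cube center) lands on the face of its largest-magnitude component, the same rule cube map texture lookup uses. A sketch of just that part (the actual clipping of polygons against the face boundaries is the hard part and is omitted here):

```python
def dominant_face(v):
    """Return which cube face ('+X', '-X', ...) the direction v
    projects onto: the axis with the largest absolute component wins,
    and its sign picks the positive or negative face."""
    x, y, z = v
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+X" if x >= 0 else "-X"
    if ay >= az:
        return "+Y" if y >= 0 else "-Y"
    return "+Z" if z >= 0 else "-Z"
```

A polygon whose vertices fall on different faces is exactly the case that would need splitting before per-face projection.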
Just some thoughts…
I think it’s much simpler than that. Just make it like MRT (multiple render targets), but with multiple position outputs in the vertex program.
The vertex program outputs six different vertex positions (and perhaps other attributes as well), and these six positions get separately interpolated and rasterized into the different render targets.
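In matrix terms, such a vertex program would apply six view-projection transforms to each incoming vertex, one per cube face. A hedged Python sketch of that computation (90-degree FOV, square aspect, per-face bases in the OpenGL cubemap convention; names here are illustrative, not a real API):

```python
FACES = [
    ("+X", ( 1.0,  0.0,  0.0), (0.0, -1.0,  0.0)),
    ("-X", (-1.0,  0.0,  0.0), (0.0, -1.0,  0.0)),
    ("+Y", ( 0.0,  1.0,  0.0), (0.0,  0.0,  1.0)),
    ("-Y", ( 0.0, -1.0,  0.0), (0.0,  0.0, -1.0)),
    ("+Z", ( 0.0,  0.0,  1.0), (0.0, -1.0,  0.0)),
    ("-Z", ( 0.0,  0.0, -1.0), (0.0, -1.0,  0.0)),
]

def project_to_face(v, forward, up, near=0.1, far=100.0):
    """Rotate world-space v into the face's camera space, then apply a
    90-degree perspective projection; returns clip-space (x, y, z, w)."""
    fx, fy, fz = forward
    ux, uy, uz = up
    # third basis vector: forward x up (matches the mirrored cubemap layout)
    right = (fy * uz - fz * uy, fz * ux - fx * uz, fx * uy - fy * ux)
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    cx, cy = dot(right, v), dot(up, v)
    cz = -dot(forward, v)                  # camera looks down its -Z axis
    # 90-degree FOV, aspect 1: cot(45 deg) = 1, so x and y pass through
    a = -(far + near) / (far - near)
    b = -2.0 * far * near / (far - near)
    return (cx, cy, a * cz + b, -cz)

def six_positions(v):
    # The "vertex program with six position outputs": one clip-space
    # position per cube face from a single input vertex.
    return [project_to_face(v, f, u) for _, f, u in FACES]
```

Each output position would then be interpolated and rasterized only on its own target, which is where the per-face clipping from the earlier post comes back in.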
The question is: do we really need this feature, or will more generic functionality already be covered by the upcoming geometry program extension?