Hi
I think it would be a good idea if dynamic cube-map generation could be completed in a single pass. This could be achieved if the cube is created in world coordinates, so you could:
- set up the cube coordinates
- render the scene, letting the GPU calculate the vectors from the centre of the cube to the scene's vertices (only a simple subtraction required); using the same face-determining formula that's already implemented for cube-map lookups, it would be able to write the texels to the correct face and location (it's just the same as drawing a cube with vector lookups for the texture coordinates, but in reverse).
With pbuffer support (although it is a pain to set up), depth can still be stored and tested.
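For reference, the face-determining formula mentioned above could be run "in reverse" roughly like this; a minimal C sketch following the standard cube-map face-selection rules (the function name and face numbering are just for illustration):

```c
#include <math.h>

/* Given a direction vector (rx, ry, rz) from the cube centre to a scene
 * vertex, pick the cube-map face (0..5, in +X,-X,+Y,-Y,+Z,-Z order) and
 * compute the (s, t) coordinates on that face, per the standard cube-map
 * face-selection rules: the largest-magnitude component chooses the face,
 * and the other two components are divided through by it. */
int cube_map_face(float rx, float ry, float rz, float *s, float *t)
{
    float ax = fabsf(rx), ay = fabsf(ry), az = fabsf(rz);
    float sc, tc, ma;
    int face;

    if (ax >= ay && ax >= az) {          /* major axis is X */
        ma = ax;
        if (rx > 0) { face = 0; sc = -rz; tc = -ry; }
        else        { face = 1; sc =  rz; tc = -ry; }
    } else if (ay >= az) {               /* major axis is Y */
        ma = ay;
        if (ry > 0) { face = 2; sc =  rx; tc =  rz; }
        else        { face = 3; sc =  rx; tc = -rz; }
    } else {                             /* major axis is Z */
        ma = az;
        if (rz > 0) { face = 4; sc =  rx; tc = -ry; }
        else        { face = 5; sc = -rx; tc = -ry; }
    }

    /* map from [-1, 1] to [0, 1] texture space */
    *s = 0.5f * (sc / ma + 1.0f);
    *t = 0.5f * (tc / ma + 1.0f);
    return face;
}
```

Multiplying s and t by the face resolution gives the texel location the hardware would write to.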
An added advantage could be realized by means of something like:

glEnable(GL_WORLD_CUBE_CREATION) - mmm, something like this
glEnable(GL_CUBE_HIDDEN_SURFACE) - reflective (would, however, require a GL state to hold the eye coordinates)

or

glDisable(GL_CUBE_HIDDEN_SURFACE)
glBlend(GL_CUBE_HIDDEN) - could be used for refractive blends
(this would, however, require a GL state for the refractive index to be stored)
But if GL_USER_BUFFERn were added to the pixel format (as huge video memory is available), similar to the depth buffer (which could by default be GL_USER_BUFFER0), it could be exposed to the fragment program for writing, or more conventionally inside a glBegin/glEnd pair with something like
glBufferf(GL_USER_BUFFER1, val)
or, for a range of buffers, if the pixel format accepted GL_USER_BUFFER1 = rgb8, rgb16, or r32 etc. (creating buffers 1->n):
glBuffer3f(GL_USER_BUFFER1, a, b, c) - GL_USER_BUFFER1 being the base buffer; writes to buffers 1->3
This would greatly increase GL's flexibility for programmers; i.e. you could store the refractive index while the scene is being drawn, or suchlike.
There are a few more advantages to world-space cube creation: stationary static objects don't require transformation; there's no need for a projection matrix (only the modelview and possibly texture matrices are required); and since only one pass is required, the frame rate automatically increases.
Phew, exhausted!
Season's greetings,
dasraiser