Unify ALL Buffers

Right now, in OpenGL 1.whatever, we assume several different kinds of buffers: frame buffer, depth buffer, texture memory, stencil buffer, and so on. My suggestion is to have only one kind of buffer: treat them all as a plain memory resource and manage them the way we manage texture objects.

For example, when we initialize GLUT (or whatever windowing layer), we could ask for two buffers: buffer1 with GL_RGBA storage per pixel and buffer2 with GL_FLOAT storage per pixel. We could allocate textures the same way, except that a texture is not tied to the screen's pixels.

Then the programmer can link the various buffers to different roles: say, use buffer1 as the frame buffer for rendering, use buffer2 as the depth buffer during rendering, and show buffer1 on the actual screen when the rendering is done.

At the same time, these buffers could be unified with the pixel shader side, so that shaders can read their contents directly during shading.
In this way, buffer1 acts as the frame buffer in one pass and as a texture source for the pixel shader in a later pass.

In this sense, a buffer is just some memory. The most important thing is that no type is bound to a buffer; what matters is how we link it to a usage and how we interpret its contents (FLOAT/RGBA/…).

Furthermore, we could treat textures the same way. Is this possible?

You’re mixing it all up!!!
The framebuffer consists of the DEPTH, STENCIL, COLOR, ACCUMULATION, AUXILIARY and other buffers.
They’re already unified. What you probably mean is that you can use every part of the framebuffer the same way, say, use the depth buffer as a texture via glTexSubImage*.
In OGL2 this is obsolete, since you’ll get full control over buffer access.


The framebuffer consists of the DEPTH, STENCIL, COLOR, ACCUMULATION, AUXILIARY and other buffers

That is utterly wrong.
The framebuffer is only the buffer that the destination image is rendered to. These buffers are not unified: if they were, we would probably have hardware accumulation buffers just as we have hardware stencil buffers, but we don’t.

It sounds like a good idea in principle; the only problem is that most of this setup relies on platform dependencies to get right. You currently cannot set up an OpenGL window on Windoze without calling the Windoze API, and that kind of thing should be kept separate from OpenGL. Perhaps OpenML would be more appropriate for this stuff?