Suggestion for unified buffers and other things

I worked on this idea for a year or so, and I finally wrote it down last September, so it would be nice if you would read and reflect on it: http://www.flashbang.se/archives/275
Discuss if you feel like it.

Interesting! I have thought about this slot idea too; I called it a slot as well, and it was very similar. It might mean there is something relevant here if two different minds end up on the same idea.

I haven’t linked it to “unified memory” kinds of things, however. Images are such specific things that the hardware does a lot of little tricks on them to make them bloody efficient. I am still unsure how such an idea could even be possible.

Blend shader: I am dying for it too! I even think that the whole deferred rendering approach makes it even more relevant. Sooner or later…

Object shader… programmable vertex pulling? I’d prefer to think of that as a “pull shader”, and I don’t think access to the whole array is reasonable on the hardware side of things. After all, I can still hardly believe that D3D11 chips support image and buffer loads and stores with atomic operations… a crazy thing, really! So maybe!

The blend shader is a great idea, and definitely fills a gaping hole in existing functionality. To be really useful though you’d need to extend it a little beyond what you’ve described, as I could see it being potentially great for things like bloom, motion blur, etc.

I think this is an optimistic way to see it. I see the blend shader as a per-pixel operation; “bloom, motion blur” are per-image operations.

But still, it could do much more than the example here!

First, a word about OpenGL. In the last 10 years, there have been two major attempts to radically alter the API.

Both of them have failed. Miserably.

Radical alterations of the OpenGL API are simply not going to happen. The ARB and legacy applications will not allow it.

Specific comments:

Currently we have at least two ways to upload or use various forms of data.
We have vertex buffers, textures, framebuffers, uniform buffers and so on.

This model proposes that all data buffers should be unified, more or less in the same way that vertex buffers are used.

First, uniform buffers and vertex buffers are the same thing. They’re just buffer objects. All buffer objects are equal. The only difference is how you use them.

You can use buffer objects as attribute sources for vertex data. You can use buffer objects as destinations for feedback. You can use buffer objects as uniform data. And so on. You could use the same buffer object for all of these.

So buffer objects are already unified in the way you fill them with contents. You don’t access a “uniform buffer” in a different way from a “vertex buffer”.
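To make that concrete, here is a minimal sketch (GL 3.1-style calls, error handling omitted; the names buf and positionData and the binding indices are placeholders for illustration) of one and the same buffer object being filled once, then used both as an attribute source and as uniform block storage:

static const GLfloat positionData[] = { 0.0f, 0.0f, 0.0f, 1.0f }; /* placeholder data */

GLuint buf;
glGenBuffers(1, &buf);

/* Fill it like any other buffer object; the target used for the upload
   says nothing about what the buffer "is". */
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, sizeof(positionData), positionData, GL_STATIC_DRAW);

/* Use it as a vertex attribute source... */
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);

/* ...and also as the backing store of a uniform block. */
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buf);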

Second, I’ll assume that by “framebuffers” you mean “renderbuffers,” because FBOs are just state objects; they don’t contain any “data”.

There is a vital semantic difference between a renderbuffer and a texture: renderbuffers (outside of a very poorly conceived NVIDIA extension) cannot be bound as a texture. This is very important, as it allows the implementation considerable freedom as to how to implement the storage layout for a renderbuffer.
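To illustrate (a minimal sketch with placeholder size and format; error checking omitted): a renderbuffer only ever shows up as a framebuffer attachment, and there is no entry point that lets you bind it for sampling the way glBindTexture does for a texture.

GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);

glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 1024, 768);

/* Attaching it as a render target is the only thing you can do with it,
   so the implementation is free to lay out the storage however it likes. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);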

Third, and most important, is this. Has it occurred to you that there is a reason why buffer objects, textures, and renderbuffers are different? That maybe the difference has a vital meaning that needs to be preserved?

At the very least, buffer objects and textures are not equivalent. Buffer objects are unformatted, linear arrays of contiguous memory; a buffer object represents exactly one such array. A texture object contains multiple images of formatted memory (mipmap levels, array layers, and so on).

You’re trying to fit a round peg in a square hole if you want to make textures and buffer objects the same.

When you use a buffer object, you, the user, are fully in control of the arrangement of bytes in that buffer. Even when you use glReadPixels or transform feedback to fill the buffer, you specify the format of the data that will be put into it. You control the data and you control the size of the buffer.
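For instance, a readback into a pixel pack buffer looks like this (a minimal sketch; the dimensions and usage hint are placeholders): the caller picks the format/type pair and sizes the buffer to match, so the byte layout is entirely in the caller's hands.

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, 1024 * 768 * 4, NULL, GL_STREAM_READ);

/* The format/type pair fixes the layout of the data written into the buffer;
   the last argument is an offset into the pack buffer, not a pointer. */
glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);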

This is not the case for images in a texture. And that is good. This gives the implementation the freedom to reformat the data to be read most efficiently. If images were stored in buffer objects, this would not be possible.

So instead of binding textures, vertex arrays, or shaders the normal way, one would do it sort of like this:

glBind(GL_TEXTURE_SLOT, 0, buffer);
or
glBind(GL_VERTEX_SLOT, 2, buffer);
or
glBind(GL_SHADER_SLOT, GL_VERTEXSHADER, shader);

And what good is this, exactly? Ignoring the previous point that textures and buffer objects are completely different and fundamentally incompatible constructs, what power does this give me that I did not have before?

Can I bind a buffer object in a shader slot? No? Then why does the API strongly suggest that I can?

A good API makes it clear what is a legitimate action and what is not. In the absence of strong typing, the API should make it clear what the parameters are expected to mean. The function “glUseProgram” tells you exactly what kind of parameter it takes. As does “glBindTexture”. If you pass a texture name to glUseProgram, it’s clear that you have done something wrong.

So not only does this not provide me the ability to do something that I couldn’t do just as well with the old API, it also makes the API more confusing by overloading a single function call to do radically different things.

Also, you’re missing some functionality. glBindBufferRange, for example, or do you expect each uniform buffer to be its own object? How do you set up the vertex format for an attribute? And so on.
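As a sketch of what a single glBind entry point leaves unspecified (the buffer name buf, the binding indices, offsets, sizes and strides below are placeholders):

/* buf is assumed to be an already-created buffer object. */

/* Two uniform blocks packed into one buffer need an offset and a size,
   which is exactly what glBindBufferRange provides. */
glBindBufferRange(GL_UNIFORM_BUFFER, 0, buf, 0, 256);
glBindBufferRange(GL_UNIFORM_BUFFER, 1, buf, 256, 256);

/* And a "vertex slot" binding says nothing about the attribute layout:
   component count, type, normalization, stride and offset all still have
   to be specified somewhere. */
glBindBuffer(GL_ARRAY_BUFFER, buf);
glVertexAttribPointer(3, 2, GL_FLOAT, GL_FALSE, 32, (void*)16);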

Probably the most wished for, so much so that people might be a bit disappointed once it’s implemented, but here is essentially how a GL_ONE, GL_ONE shader would be written

That is wildly inconsistent with all established GLSL conventions. Shader stage outputs are write-only, in every shader stage. If you want to do this, the shader should get inputs for the source and destination colors. It should not suddenly allow you to read from previously write-only variables.

Note that only simple arithmetic could be performed inside a blend shader; only input from the vertex shader and uniforms is allowed, which means no textures. If you want texture data, you have to pass it from the fragment shader.

Then what’s the point of proposing a blend shader stage at all? If you can’t do arbitrary computations at the blend level, if this is just a complicated way of setting the blend parameters, what’s the point?
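After all, the entire effect of the quoted GL_ONE/GL_ONE example already fits in two lines of the existing fixed-function API:

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);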

It seems strange to propose such a grandiose, massive alteration of OpenGL’s basic API on the one hand, and then such an incredibly conservative blend shader stage on the other.