Why different semantics?

For a texture, you need to generate an ID and then bind that ID in order to attach an image.

GLuint t;
glGenTextures(1, &t);
glBindTexture(GL_TEXTURE_2D, t);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, …);

For the OpenGL Shading Language you don't need this.

GLuint Handle = glCreateProgram();
GLuint ShaderHandle = glCreateShader(GL_VERTEX_SHADER_ARB);
glShaderSource(ShaderHandle, 1, str, len);
glCompileShader(ShaderHandle);
glAttachShader(Handle, ShaderHandle);
… and so on.

That's very nice, since it doesn't disturb the state in any way, so you don't need to store the current program and then switch back to it if you just need to prepare a program.
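For contrast, here's a minimal sketch of what preparing a texture looks like if you want to stay state-neutral (the 256x256 size and the NULL image are just placeholders I made up):

// Bind-to-edit objects force you to save and restore the current binding.
GLint prevTex;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &prevTex);   // remember what was bound

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);                // clobbers the binding
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 256, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);     // allocate storage, no data yet

glBindTexture(GL_TEXTURE_2D, (GLuint)prevTex);    // restore the previous binding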

But then we have FBOs, which go back to the texture way.

GLuint FrameBufferID;
glGenFramebuffersEXT(1, &FrameBufferID);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, FrameBufferID);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureID, 0);

and thus force a state change just to prepare the buffer.
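The same save/restore dance applies here; a rough sketch, using the EXT entry points from above and assuming the texture already exists:

// Attaching a texture requires binding the FBO, so staying state-neutral is manual.
GLint prevFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_EXT, &prevFBO);  // remember the current FBO

GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);        // clobbers the binding
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, textureID, 0);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, (GLuint)prevFBO);  // restore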

Can't we stick to one of these semantics?

When they promoted glslang to the 2.0 core, they reverted it back to the texture-object standard. For good or for ill…

Sure?

I can see some changes to GLuint and some name changes, but I can still attach a shader without binding the program first. The whole program creation is 'offline' from the state machine, since in all the functions you pass in the program ID.
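To show what I mean by 'offline', a minimal sketch (assuming 'shader' was compiled earlier): every call takes the program handle explicitly and never touches whatever program glUseProgram has made current.

GLuint prog = glCreateProgram();
glAttachShader(prog, shader);                   // attach without binding anything
glLinkProgram(prog);

GLint linked;
glGetProgramiv(prog, GL_LINK_STATUS, &linked);  // even the query goes by handle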

Or have I missed something in the extension -> core conversion?

There was quite a bit of debate about this back when the 2.0 spec was up for a vote. The (rough) consensus was that the "texture object" model of using Gen/Bind/Delete was the preferred way to go. However, due to time constraints and the desire to maintain compatibility with existing GLSL applications, the minimum set of changes was made to the shader extensions to get rid of GLhandle, etc.

You can see some of the discussion about this in the June 04 meeting minutes.

I believe that had the issue been driven more strongly when the ARB_{vertex,fragment}_shader extensions were being developed, they probably would have used that model from day one. I also believe, though less strongly, that had consensus been reached a couple of months earlier, the GLSL API in OpenGL 2.0 would have been redesigned to look more like the texture API, the occlusion query API, etc. I know that at least a few people in the ARB, myself included, would have preferred that.

The cold, hard reality is that we really wanted to finalize the spec in time for SIGGRAPH 04. Given that and the desire to cause as little pain for developers already using the ARB extensions as possible, we made a compromise. I felt at the time (and still do) that we made the right choice.

"The cold, hard reality is that we really wanted to finalize the spec in time for SIGGRAPH 04."
While personally I’d prefer if the original glslang mechanism were the standard OpenGL object handling solution, I don’t understand this particular logic. I mean, if you’re going to do something that’s supposed to go into the spec (to some degree, forever) rather than just an extension, then do it right. What does SIGGRAPH have to do with anything? It’s not like people are going to immediately start using GL2.0 at the end of SIGGRAPH or something…