I just thought that OpenGL as we know it is really antiquated… it is built around a structure that is no longer used, and merely serves as a data-passing API (passing data to a graphics chip). OpenGL was built around a certain model, consisting of vertices, faces, normals, textures, light sources and per-vertex lighting, and hardly any of it is used any more in high-end graphics… OpenGL is merely used to pass data via texture coordinates (etc.) to the graphics chip, where complex processing is performed per pixel, but this has nothing to do with the original model OpenGL was once built around… So, as I said, it seems to me that this model is no longer used (and no longer useful), and OpenGL has been reduced to a simple data-passing API; or rather, a graphics API most of which goes unused, while some parts are used for passing data to the graphics chip. Isn't that weird?
Unless you’re using the GPU as a general-purpose processor, there is still semantic meaning in vertex attributes: position, color, texture coordinates. Yes, we now have the option of passing all kinds of data with entirely different semantic meanings, but if you’re rendering, you will probably still have some kind of position, and likely a texture coordinate or two.
Now, as for the GL functionality overridden by vertex/fragment programs, it does seem a bit deprecated. But there’s nothing that says you cannot use those pieces of state as program uniforms. If your shader has the concept of a light, you can still pass the light position, color and a few other parameters to it using regular OpenGL functions.
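As a sketch of that idea, here is what the old per-vertex light model looks like recast as fragment-shader uniforms (GLSL of that era, so `varying`/`gl_FragColor`). The uniform and varying names are made up for illustration; on the application side you would set them with `glGetUniformLocation` and `glUniform3fv` instead of `glLightfv`.

```glsl
// Classic light state, but passed as plain uniforms the app sets itself.
uniform vec3 lightPosition;   // eye-space light position
uniform vec3 lightColor;
uniform sampler2D diffuseMap;

varying vec3 normal;   // interpolated per-vertex normal
varying vec3 fragPos;  // eye-space fragment position
varying vec2 uv;

void main() {
    // Same diffuse term the fixed-function pipeline computed per vertex,
    // now evaluated per pixel.
    vec3 n = normalize(normal);
    vec3 l = normalize(lightPosition - fragPos);
    float diff = max(dot(n, l), 0.0);
    gl_FragColor = vec4(lightColor * diff, 1.0) * texture2D(diffuseMap, uv);
}
```

So the "light" concept survives; it just lives in your shader and your uniform calls rather than in fixed GL state.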
Yes, it would be cleaner if all of that old state could go away (though, to be honest, we should keep some of it for rapid prototyping: single-texture texenv with a per-vertex color, for example). But if you want to use this other stuff, it is there for you. And if you don’t, you don’t have to.