nVidia and MS worked together on creating DX8 with its vertex and pixel “shader” architecture. This is new functionality that GL lacks. nVidia has added a GL extension for vertex programs, and I expect that when NV20 comes out they will add an extension to access its per-pixel abilities (dependent texture reads and the like). This is, of course, good. We wouldn’t want new hardware to come out without our being able to access its features through GL.
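For a sense of what the vertex-program extension looks like in practice, here is a minimal NV_vertex_program string; it just reproduces the fixed-function transform and passes the diffuse color through. This is a sketch, not production code: it assumes constants c[0]–c[3] have been bound to the concatenated modelview-projection matrix (the extension provides glTrackMatrixNV for exactly that), and that the program has been loaded and enabled with glLoadProgramNV / glEnable(GL_VERTEX_PROGRAM_NV).

```
!!VP1.0
# Transform the object-space position by the tracked
# modelview-projection matrix in c[0]..c[3].
DP4 o[HPOS].x, c[0], v[OPOS];
DP4 o[HPOS].y, c[1], v[OPOS];
DP4 o[HPOS].z, c[2], v[OPOS];
DP4 o[HPOS].w, c[3], v[OPOS];
# Pass the per-vertex diffuse color through unchanged.
MOV o[COL0], v[COL0];
END
```

Even this trivial program makes the point: the programming model is a vendor-defined assembly language, distinct from DX8’s shader language, and nothing in core GL specifies either one.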
In the long term, though, this kind of programmable pipeline will become more and more prevalent. I believe that eventually this kind of functionality will have to be folded into the official OpenGL specification. Either that, or this is the time for an official break of a games-specific flavor of GL from the existing system. At this point, taking advantage of vertex arrays, multitexture, complex blend modes and shading operations, and programmable per-vertex math leaves one writing code that consists almost entirely of extensions. This functionality should either be brought into GL proper, or spun off into a separately evolving spec that consumer gaming cards would implement (a “gaming” subset, similar to the “imaging” subset).
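To make the “code consisting only of extensions” point concrete: a renderer targeting current consumer hardware ends up probing the GL_EXTENSIONS string for something like the following set. This is a hypothetical but representative list; note that only the ARB/EXT entries have any standing across vendors, while the NV_* entries are nVidia-specific.

```
GL_ARB_multitexture            # multiple texture units per pass
GL_EXT_compiled_vertex_array   # locked vertex arrays for reuse
GL_EXT_texture_env_combine     # richer per-stage blend modes
GL_NV_register_combiners       # vendor-specific fragment math
GL_NV_vertex_program           # programmable per-vertex math
```

Each of these requires a runtime string check plus function pointers fetched through the platform’s GetProcAddress mechanism, which is exactly the kind of plumbing a core or subset spec would eliminate.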
Is the DX8 API for a programmable pipeline (and the corresponding shader language they have chosen) the “right” choice? What should happen in the next release(s) of OpenGL to adapt it to the reality of T&L hardware and programmable GPUs?