I think this is a backwards looking approach
God forbid that we might consider a solution that plans for the future without destroying the present. If OpenGL 2.0 is only useful on R300-class or better cards, then nobody is going to switch to it for at least three years. Even today, you don’t see many developers, even D3D developers, making significant use of DX8 programmability, and they have a standard language.
Above all, OpenGL 2.0 should be reasonably implementable in current hardware.
This gives exactly the mess of different vertex program / fragment program versions we already have.
You consider it a mess. I don’t. The actual mess is that each vendor’s shaders are exposed through a completely different interface. If we had a standard method of loading up a shader program, but with vendor-specific processing of that program, that would be relatively OK. At most, it requires writing shaders in a few languages and doing a quick if-then test or two at bind time (or, if you’re smart, at load time). Hardly a significant issue. No, the real problem comes in when EXT_vertex_shader’s interface looks so drastically different from NV_vertex_program’s.
because it is clear that hardware will support a full-programmable vertex/fragment unit anyway in near future
The “near future”? Precisely when is this? A year from now? Two years? What do you consider “full-programmable” anyway?
In any case, my proposal does not ignore the future. It provides for it by having the ARB supply various extensions to the language that would be considered “standard” for GL 2.0 functionality. No one is forced to use them, and a program can easily test to see what functionality exists. When most of the available hardware supports all of the ARB extensions, GL 2.1 can require them by making them part of the spec.
The only difference between this and the current GL 2.0 proposal is that it provides a reasonable amount of backwards compatibility.