Disclaimer: Note that I haven’t worked at all on Cg. Also keep in mind that the below are all personal opinions, not necessarily reflecting the views of my employer. Indeed, I’m actually on vacation from work right now, after my recent college graduation…
Eric, your post reminds me of an interesting point – the “ASM” vs. “HLL” issue, as it relates to the API. The way I see it, what makes the most sense from an API standpoint is to expose an assembly-level language from the graphics API (recall that OpenGL is supposed to be a “low-level” graphics API), and to layer a HLL compiler on top rather than integrating it into OpenGL. I think it makes very little sense to make a shading language part of the API itself.
This is the approach that we have taken with NV_vertex_program and Cg, and I think it’s the right design decision. You can precompile shaders; you can examine what the assembly looks like; you can have an API-independent runtime layer that works with more than just OpenGL; and, of course, you can still write assembly programs when you need to!
This is a personal beef of mine with the OGL2.0 proposals – I really don’t think it makes any sense to put a HLL inside the OpenGL API.
OGL1.4 is taking the right approach by exposing an assembly language from the API. The assembly language can be upgraded later, but it will certainly be functional in its initial form, and a viable compiler target. So a sufficiently inclined individual or company could write a compiler from the proposed 3Dlabs shading language to ARB_vertex_program right now, today. The only reason to wait and make it part of “OGL2.0” is marketing.
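For illustration, here’s roughly what a trivial program looks like in this kind of assembly language – a minimal sketch following the published ARB_vertex_program syntax, not output from any actual compiler:

```
!!ARBvp1.0
# Transform the vertex position by the modelview-projection matrix
# and pass the primary color through unchanged.
DP4 result.position.x, state.matrix.mvp.row[0], vertex.position;
DP4 result.position.y, state.matrix.mvp.row[1], vertex.position;
DP4 result.position.z, state.matrix.mvp.row[2], vertex.position;
DP4 result.position.w, state.matrix.mvp.row[3], vertex.position;
MOV result.color, vertex.color;
END
```

A HLL compiler targeting this language just has to emit text like the above; the driver only ever has to understand the assembly.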
If you wanted to be really picky, you could have a shading language as part of a “GLU 2.0”. This layer would simply call glLoadProgramNV (or the ARB_vertex_program equivalent thereof). This analogy isn’t entirely accurate because GLU generally doesn’t have a driver layer that allows different vendors to plug in their own implementation. However, this analogy does make it clear how a separate layer can work, and also illustrates how “standardization” is a straw man for putting the shading language in the base API. It’s perfectly possible to standardize a shading language and even a shading runtime that sits at any layer.
A final argument that has been made is that it is somehow valuable not to support an assembly language, because dropping it eliminates some sort of backwards-compatibility burden. But since ARB_vertex_program isn’t going away (much less NV_vertex_program or DX8), that burden will already exist by developer demand. In the very worst case, you could “compile” an ARB_vertex_program into a high-level program. This assumes that the HLL has the same set of program inputs (i.e., vertex attribs) and outputs as the ARB language, but I think that’s a reasonable assumption. There’s no reason to change the semantics of input/output behavior just because you are putting in a high-level language.
Oh, what I’d give to be able to have some honest discussion of the 3Dlabs OGL 2.0 proposals… just look at the poll on this site about whether you’ve reviewed the proposals. If you’ve reviewed them, you have the choice of either “fully support[ing]” them or “want[ing] to learn more”. There is no option that lets you say that you disagree with many of the design decisions, as I do.
But I’ve already probably spoken too much about this sensitive topic…
[Then again, isn’t that precisely the problem? Those of us who’ve worked on drivers for years, who live and breathe OpenGL, and who may have many criticisms and disagreements, have our tongues tied for political reasons – while developers just look at the proposals, see that there’s all this stuff in them, and think, wouldn’t it be nice to just have every feature in the world… I see a set of tradeoffs and design decisions, I see what I think are the wrong ones being made, and I can’t even tell anyone what I’d like to see changed, even if they might agree with me.]
Okay, now I should really shut up.
- Matt