First of all, the OpenGL vs. D3D debate has no place in a discussion of Cg.
As far as Cg is concerned… in many ways it is the right way to go if nVidia pulls it off correctly. OpenGL 2.0 has one fatal flaw: it is utterly useless today. Not only does the spec not exist yet, but when it does, no hardware will support it. It will be 2 to 3 years before we see GL 2.0 in hardware.
I have a card in my computer that fully supports Cg at this very moment. When nVidia releases the GeForce 5 (once again, if nVidia does it right), all of my compiled Cg code will work just fine on it. I’ll download a new Cg compiler with new, expanded functionality, but all the old Cg shaders will still compile. When the hardware is available for it, Cg will provide GL 2.0 capabilities.
As long as the Cg compiler is fully backwards compatible (and provides the ability to compile somewhat complex shaders for older hardware, to the extent that is possible), Cg is a better solution than GL 2.0. Ultimately, the problem with GL 2.0 is that, while it is a nice future, it is a useless present and will not be very usable for the near future. Cg is here and mildly useful now; it will still be here when GL 2.0’s shaders are around, and it will still be useful.
As to the argument that nVidia is using Cg as a power-grab to reclaim the market… of course they are. They are caught between two organizations beyond their control: Microsoft and the ARB.
nVidia can’t control what GL 2.0 becomes, simply because everyone else on the ARB competes with them. Those competitors want to bring the market leader down, so they will do everything in their power to make GL 2.0 as difficult as possible for nVidia to use.
At the same time, Microsoft benefits from having multiple graphics card vendors in a good position, which is why Pixel Shader 1.4 was, basically, written by ATi (to offset the fact that Pixel Shader 1.1 was written by nVidia). D3D 9’s shaders don’t give any side of the market an advantage.
Trapped between GL 2.0 being out of their control and D3D 9 not providing them the advantage D3D 8 did, they have one option: make their own language. In a way, Cg is a lot like D3D’s shaders, only with nVidia in charge of the language. It also gives us, as users of the language, room to grow, which is not something OpenGL 2.0’s shaders are designed for.
Once Cg provides conditional branching and looping constructs at both the vertex and pixel levels (and they could use the exact same syntax at both), there is really nothing a GL 2.0 shader could do that wouldn’t already be part of the language.
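To sketch what that would look like, here is a hypothetical Cg-style pixel shader using both constructs; the loop count, the brightness threshold, and the sampler parameter are illustrative assumptions, not anything nVidia has specified:

```
// Hypothetical sketch: a Cg-style pixel shader with a loop and a branch.
// Values and the texture usage are illustrative assumptions only.
float4 main(float2 uv : TEXCOORD0,
            uniform sampler2D tex) : COLOR
{
    float4 sum = float4(0, 0, 0, 0);

    // Looping construct: average four horizontally offset texture samples.
    for (int i = 0; i < 4; i++) {
        sum += tex2D(tex, uv + float2(i * 0.01, 0));
    }
    sum *= 0.25;

    // Conditional branching: darken pixels below a brightness threshold.
    if (sum.r < 0.5) {
        sum *= 0.5;
    }

    return sum;
}
```

Today’s pixel shader profiles can’t run a loop or branch like this, which is exactly the point: the syntax is already ordinary C-style Cg, so only the hardware has to catch up.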
Is this a blatantly monopolistic move? Sure, since it is highly likely that none of their competitors will be writing a Cg version. At the same time, there are worse companies that could monopolize the graphics card market. That’s one of the reasons I don’t mind that Microsoft has its monopolies: as long as they keep producing products I like to use and that are productive, I will continue to use them. And as long as nVidia continues to produce products of high quality, I am willing to overlook their blatant power-grab.
As long as Cg is backwards compatible, and the language itself doesn’t change too much (as I said, the only additions needed are looping and conditional-branching syntax), it should be a better alternative than the vaporware that is GL 2.0.