NVidia to deprecate Cg in favour of GLslang?

“After all, it doesn’t hurt ATi or 3Dlabs in the slightest not to have these facilities.”
I don’t know the details, but it was said that the fragment pipeline in current 3Dlabs HW is fixed-point (16-bit?). So there is a slight possibility they might have benefited too.

“So it’s almost a given that they will come up with another extension allowing half in GLslang soon.”
It is just as likely as unlikely; it simply remains to be seen. nVidia might just say: “Sure, you can use GLslang on our HW, but for the best performance, we recommend Cg or NV_FP.” That would be commitment to GLslang in theory, but not in practice. That’s why I said the situation is not 100% clear yet, and optimistic reactions are a bit premature, IMHO.
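For reference, the precision types at the centre of this debate only exist on the Cg / NV_FP side today. A minimal sketch of a Cg fragment shader (the shader itself and the names baseMap and tint are just illustrative, not taken from any of the posts above):

// Cg lets the author request fp16 storage explicitly; GLslang 1.0 exposes
// only full-precision floats, so the same shader there would use vec4 throughout.
half4 main(float2 uv            : TEXCOORD0,
           uniform sampler2D baseMap,
           uniform half4 tint) : COLOR
{
    half4 base = (half4)tex2D(baseMap, uv);  // texture result stored at half precision
    return base * tint;                      // fp16 multiply where the HW supports it
}

Whether a future GLslang extension exposes something equivalent (a half type or a precision hint) is exactly the open question here.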