Your latter statement suggests GLSL. With Cg, at least historically, you effectively drop back to ARB_vertex_program/ARB_fragment_program (arbvp1 and arbfp1 in Cg profile lingo) functionality on ATI hardware, since NVIDIA (as the Cg author/maintainer) doesn't support any newer shader extensions on ATI (AFAIK).
HOWEVER, the one thing I'm not sure about (it was added after I last worked thoroughly with Cg) is Cg's cross-compilation to GLSL. I know Cg can now cross-compile to GLSL – I've used this from the command line to help debug and optimize GLSL shaders. What I'm unsure of is whether you can tell the Cg API to transparently use GLSL as a back-end profile – that is, as the way it feeds shaders to the GPU and drives it. It looks like you can, though: the fact that there are specific profile macros in the Cg header files for GLSL (glslv, glslf, glslg, glslc) suggests as much.
In which case (if this is really supported), you could write your shaders in Cg, use the Cg run-time to make things easier, tell Cg to cross-compile to GLSL behind the scenes and drive the GPU with GLSL shaders, and thereby get access to most (if not all) of the latest GPU features – that is, not be limited to arbvp1/arbfp1-level functionality.
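If the GLSL back-end really is exposed through the runtime, the selection logic amounts to a simple fallback: prefer a GLSL profile when it's supported, otherwise drop to the lowest-common-denominator ARB profiles. Here's a hypothetical C sketch of that decision – the function names and the capability flag are mine for illustration, not real Cg API (in the actual runtime you'd query profile support, e.g. via the cgGL routines):

```c
#include <string.h>

/* Hypothetical sketch of the profile fallback described above: prefer the
 * GLSL back-end profiles ("glslv"/"glslf") when the runtime supports them,
 * otherwise drop back to the baseline ARB profiles ("arbvp1"/"arbfp1").
 * The function names and the boolean flag are illustrative only; they are
 * not part of the real Cg API. */
const char *choose_vertex_profile(int glsl_backend_supported)
{
    return glsl_backend_supported ? "glslv" : "arbvp1";
}

const char *choose_fragment_profile(int glsl_backend_supported)
{
    return glsl_backend_supported ? "glslf" : "arbfp1";
}
```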
However, for getting started I'd suggest learning shaders in GLSL. There's less software between you and the GPU, and GLSL and Cg are very similar until you get into advanced features – just a few syntactic quirks, and a few built-in functions have different names (mix vs. lerp, etc.). Right now you just need to latch onto the concepts.
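For example, GLSL's mix() and Cg's lerp() are the same linear-interpolation function under different names. A quick C reimplementation of both (my own, just to show they compute the same thing):

```c
/* GLSL spells linear interpolation mix(a, b, t); Cg spells it lerp(a, b, t).
 * Both compute a * (1 - t) + b * t. Reimplemented here for illustration --
 * these are stand-ins, not the actual shader built-ins. */
float glsl_mix(float a, float b, float t) { return a * (1.0f - t) + b * t; }
float cg_lerp(float a, float b, float t) { return a * (1.0f - t) + b * t; }
```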
Then once you’re up on that, you’ll know enough to make the call as to whether you’d benefit from using Cg or not.