Card reviews say “supports OpenGL 2.0” or “2.1” or “3.0” in their tech specs, if GL is mentioned at all. Users of the two OpenGL games have to browse forums to find out whether those games will run on card X, if they don’t have an nVidia card. It’s a gaming-on-DX world.
It’s true that shader models are much more meaningful; they’re down to the metal. GLSL, by contrast, is designed so that unsupported features could in theory be emulated/simulated, e.g. by unrolling loops and recompiling shaders every frame, though in practice implementations just return compile errors instead. Over-abstraction.
Query for NV_ extensions -> if it’s an NV card, you know which extensions to look for. Otherwise, try compiling your GLSL shaders (and hope you get lucky). On non-NV hardware you only guarantee success with GLSL if you stick to SM1.0-level functionality. Heh, might as well just go the way of ARB-asm :P. Or constantly send bug reports to ATi on shaders that don’t compile; they’re quick at fixing things.
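The NV-detection step above boils down to scanning the space-separated string from `glGetString(GL_EXTENSIONS)` for a vendor prefix. A minimal sketch, written as a pure string scan so it carries no GL dependency (the function name `has_vendor_ext` is my own, not a GL API):

```c
#include <stdbool.h>
#include <string.h>

/* Returns true if the space-separated extension string (the kind
 * glGetString(GL_EXTENSIONS) hands back) contains at least one token
 * starting with the given vendor prefix, e.g. "GL_NV_".
 * Pure string scan, no GL calls, so it is testable anywhere. */
static bool has_vendor_ext(const char *extensions, const char *prefix)
{
    size_t plen = strlen(prefix);
    const char *p = extensions;
    while (p && *p) {
        const char *end = strchr(p, ' ');                 /* next token boundary */
        size_t len = end ? (size_t)(end - p) : strlen(p); /* token length */
        if (len >= plen && strncmp(p, prefix, plen) == 0)
            return true;                                  /* found a GL_NV_* ext */
        p = end ? end + 1 : NULL;                         /* advance past the space */
    }
    return false;
}
```

In the real app you’d feed it the live string, something like `has_vendor_ext((const char *)glGetString(GL_EXTENSIONS), "GL_NV_")`, and branch to the NV-specific path when it returns true.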
Or limit the non-nVidia path to OpenGL 3.0, and again hope for the best.
So, a forward-looking optimistic approach would be to just try out all your shaders at startup (compile, link, draw a pixel, fetch the pixel, compare). Roll back to a lower-quality path if any of them fails; rinse and repeat.
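The roll-back part of that scheme is just “walk the quality tiers from best to worst, keep the first one that passes”. A sketch under stated assumptions: the per-path probe (compile + link every shader, draw one pixel to an offscreen target, read it back with glReadPixels, compare) is stubbed out as a filled-in `bool` array here, and `pick_render_path` is a hypothetical name of mine:

```c
#include <stdbool.h>

/* path_ok[i] = did every shader of quality path i compile, link, and
 * render the expected test pixel? In the real app this array would be
 * filled by the GL probe described above; here it is a stub.
 * Paths are ordered best quality first; the last one should be a
 * fixed-function fallback that always succeeds. */
static int pick_render_path(const bool *path_ok, int count)
{
    for (int i = 0; i < count; ++i)
        if (path_ok[i])
            return i;   /* first (highest-quality) path that works */
    return -1;          /* nothing works at all: bail out */
}
```

The nice property is that it treats the driver as a black box: you never guess from version strings or vendor IDs, you just measure what actually renders correctly on this machine.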
P.S.: I haven’t tried running complex GLSL on ATi and Intel cards; I just recently bought some for such testing, to be done later. I’m looking at GLSL pessimistically because of the problems I’d hit on non-nV cards up until several months ago, and all the similar reports on forums online; ultimately I abandoned the idea of eye-candy on non-nV hardware. An all-or-nothing situation. Also, my non-hobby GL work needs just under SM2.0 functionality, for which ARB-asm is enough. I can only extend my condolences to devs that need SM3/SM4 eye-candy in their serious projects.