How do I query the shader model that GLSL supports? Is there an equivalent to HLSL's shader-model specification?
I ask because I see shader models becoming the standard way to describe a card's capabilities, and I want to make sure capabilities aren't measured only by a "non-standard" API (Direct3D).
I think (not sure):
Something that should work on all cards: try to compile, link, and use (draw one pixel with) a representative GLSL shader.
Otherwise, hope that extension strings or wglGetProcAddress are enough.
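For the extension-string route, note that a bare `strstr` can false-positive when one extension name is a prefix of another. A minimal sketch in C (the function name `has_extension` is mine, not from any GL header), assuming the classic space-separated list returned by `glGetString(GL_EXTENSIONS)`:

```c
#include <string.h>

/* Check a space-separated extension list for an exact name match.
 * A bare strstr() would wrongly match "GL_EXT_foo" inside
 * "GL_EXT_foobar", so verify the token boundaries on both sides. */
static int has_extension(const char *extlist, const char *name)
{
    size_t len = strlen(name);
    const char *p = extlist;
    while ((p = strstr(p, name)) != NULL) {
        int starts_token = (p == extlist) || (p[-1] == ' ');
        int ends_token   = (p[len] == '\0') || (p[len] == ' ');
        if (starts_token && ends_token)
            return 1;
        p += len;
    }
    return 0;
}
```

Usage would be e.g. `has_extension(extlist, "GL_ARB_shader_objects")` against the string the driver hands you.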
Card reviews say "supports OpenGL 2.0" (or 2.1, or 3.0) in the tech specs, if GL is mentioned at all. Users of the two OpenGL games have to browse forums to find out whether those games will run on card X, unless they have an nVidia card. It's a gaming-on-DX world.
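That advertised GL version is at least a coarse proxy: roughly, GL 2.x parts are SM2/SM3-class and GL 3.x parts are SM4 (DX10)-class hardware. A hedged sketch of mapping the "major.minor" prefix of `glGetString(GL_VERSION)` to such a class (the function name and the return convention are my own, and the mapping is approximate, not a spec guarantee):

```c
#include <stdio.h>

/* Map the "major.minor" prefix of a GL_VERSION string to a rough
 * Direct3D shader-model class. Approximate heuristic only:
 * GL 2.x hardware is roughly SM2/SM3-class, GL 3.x roughly SM4. */
static int approx_shader_model(const char *version)
{
    int major = 0, minor = 0;
    if (sscanf(version, "%d.%d", &major, &minor) < 1)
        return 0;               /* unparsable version string */
    if (major >= 3) return 4;   /* DX10-class hardware */
    if (major == 2) return 3;   /* could be SM2 or SM3; probe to be sure */
    return 1;                   /* GL 1.x: fixed-function / SM1-ish */
}
```

A GL 2.x answer still needs the compile-and-probe treatment to tell SM2-class from SM3-class parts apart.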
It's true that shader models are much more meaningful; they're down to the metal. GLSL is designed so that unsupported features could be emulated/simulated, e.g. by unrolling loops and recompiling shaders every frame (though in practice implementations ultimately just return compile errors). Over-abstraction.
Query for NV_ extensions, so if it's an NV card you know which extensions to look for. Otherwise, try compiling your GLSL shaders (and hope you get lucky). You can only guarantee success with GLSL on non-NV hardware if you stick to SM1.0-level functionality. Heh, might as well just go the ARB-asm route :P. Or keep sending bug reports to ATI for shaders that don't compile; they're quick at fixing things.
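Before looking for NV_ extensions at all, you can branch on the vendor. A sketch of classifying the string returned by `glGetString(GL_VENDOR)` (the enum and function names are mine; the substrings "NVIDIA", "ATI", "Intel" are a common heuristic for real driver strings like "NVIDIA Corporation" or "ATI Technologies Inc.", not a guarantee):

```c
#include <string.h>

enum gpu_vendor { VENDOR_NVIDIA, VENDOR_ATI, VENDOR_INTEL, VENDOR_OTHER };

/* Classify a GL_VENDOR string by substring. Exact strings vary by
 * driver version, so treat this as a hint for choosing which
 * vendor-specific extension set to probe, not as ground truth. */
static enum gpu_vendor classify_vendor(const char *vendor)
{
    if (strstr(vendor, "NVIDIA")) return VENDOR_NVIDIA;
    if (strstr(vendor, "ATI"))    return VENDOR_ATI;
    if (strstr(vendor, "Intel"))  return VENDOR_INTEL;
    return VENDOR_OTHER;
}
```

On the NVIDIA branch you would then check for the NV_ extensions you need; on the other branches, fall through to the compile-and-hope path.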
Or limit the non-nVidia path to OpenGL 3.0, and again hope for the best.
So a forward-looking, optimistic approach would be to simply try out all your shaders (compile, link, draw a pixel, read the pixel back, compare), and fall back to a lower-quality path whenever one fails; rinse and repeat.
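The probe itself (compile, link, draw, read back) needs a live GL context, so here is only the surrounding fallback logic as a sketch, with the probe as a caller-supplied callback. All names (`path_probe_fn`, `select_render_path`) are hypothetical:

```c
/* Walk quality paths from best to worst and return the index of the
 * first one whose probe succeeds, or -1 if even the lowest path fails.
 * The probe (compile, link, draw a pixel, read it back, compare) needs
 * a live GL context and is supplied by the caller. */
typedef int (*path_probe_fn)(int path_index, void *user);

static int select_render_path(int path_count, path_probe_fn probe, void *user)
{
    for (int i = 0; i < path_count; ++i)    /* 0 = highest quality */
        if (probe(i, user))
            return i;
    return -1;
}
```

Running this once at startup (or on driver change) gives you the "rinse and repeat" behaviour: ship several shader tiers, and the card tells you which one it can actually run.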
P.S.: I haven't tried running complex GLSL on ATI and Intel cards; I just recently bought some so that testing can be done later. I'm looking at GLSL pessimistically because of the problems I ran into with non-NV cards up until several months ago, and all the similar reports on forums online; I ultimately abandoned the idea of eye-candy on non-nV hardware. An all-or-nothing situation. Also, my non-hobby GL work needs just under SM2.0 functionality, for which ARB-asm is enough. I can only extend my condolences to devs who need SM3/SM4 eye-candy in their serious projects.
Standards are good. If Microsoft - or any other interest with sufficient influence - in concert with the ISVs/IHVs can assign a meaningful, marketable number to each generation of hardware (and shading language), that's a good thing for everyone in the business trying to make money.
OpenGL has to float somewhere above a convention like this, since by design and definition the API is not tied to a specific generation of hardware. Owing to the high level of its abstraction, its mandate is loftier and less specific about the details of the implementation - that's a trade-off.
I don't necessarily agree with this. OpenGL runs on real chips, and if the real chips have common feature sets or limits across vendors (whether due to coincidence or a "steering hand"), there's no overriding reason why GL shouldn't make that information more clearly available to the programmer.
Mr. Kilgard touched on "Direct3Disms" in his SIGGRAPH Asia presentation, which addressed some of the larger issue of "standardization" in a way that's meaningful to folks porting from D3D. Making OpenGL a relatively painless, short-term transition for the majority of desktop game makers, for example, would not only pave the way to some semblance of standardization among the APIs but might also open the door to markets hitherto untapped due to budgetary constraints within increasingly long and costly development cycles. As I see it, anything that's good for PC gaming in general is potentially good for OpenGL in particular.
P.S. One "OpenGLism" I'd like to see in D3D is a symmetric clip-space cube (z in [-w, w] rather than [0, w]).