OpenGL 3.1 and Cg/Cg FX

Hello everyone!

I am building a 3D engine for a commercial title. We are targeting Windows, Linux, and Apple platforms. The engine will be shader-only and, for now, it supports (well… kinda :P) D3D10 and OpenGL 3.1.
Because of this I was thinking of “playing” with CgFX, which is the only way (apart from creating your own FX system) for OpenGL to support MS-FX-like shaders.
My question is: does Cg/CgFX support OpenGL 3.1? For example, do the OpenGL 3.1 semantics (e.g. InstanceID) work with Cg? Is there anything that is not supported? I have never touched Cg before, so I have no idea.
What do you suggest for a cross-platform (Windows, Linux, Mac) OpenGL 3.1/D3D10 commercial 3D engine? Maybe Cg/CgFX is not a good idea?

Thanks for your time.

Sadly, you cannot use CgFX if you plan to launch your engine on any ATI Radeon (or Radeon HD) card :P

The only CgFX profiles supported on Radeons are ARB vertex/fragment program 1.0 (which sucks)
and glslv/glslf (which are buggy, slow, and almost unusable through the CgFX API).

The only reasonable solution is to write your own FX-like API around
D3D10 and GLSL. Writing your own FX compiler is not an easy task, but you will be truly platform independent.
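
Something along these lines, just as a sketch (all the names here are invented, not a real library):

```cpp
// Hypothetical skeleton of a minimal FX-like abstraction.
#include <memory>
#include <string>
#include <vector>

// One concrete backend per API (GLSL or D3D10/HLSL) implements this.
struct Pass {
    virtual ~Pass() {}
    virtual void bind() = 0;  // set shaders + render states for this pass
    virtual void setUniform(const std::string& name,
                            const float* values, int count) = 0;
};

struct Technique {
    std::string name;
    std::vector<std::unique_ptr<Pass>> passes;
};

struct Effect {
    std::vector<Technique> techniques;
    // The "FX compiler" part: parse your own .fx-like text, emit GLSL or
    // HLSL source per pass, and hand it to the active backend.
    static std::unique_ptr<Effect> loadFromFile(const std::string& path);
};
```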

Yep. GLSL profiles were quite unusable in Cg a year ago. I am not sure about the current state of Cg, and I don’t care. At that time, I decided to use the Cg compiler only for translating Cg to GLSL, dropped CgGL, and implemented the GLSL codepath myself (and the ARBVP1/FP1 profiles too!). This way you have 100% control of the code. An experienced programmer could further improve it to support bindable_uniform or uniform_buffer_object (additional preprocessing of the GLSL code is required).
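
If it helps, here is a minimal sketch of that kind of GLSL codepath (assuming GLEW or a similar loader is already initialized; the GLSL source could come from running cgc offline with the glslv/glslf profiles, if I remember its flags right):

```cpp
// Minimal GLSL codepath: compile + link with error reporting.
// e.g. offline translation step (assumed flags):
//   cgc -profile glslv -entry main shader.cg -o shader.vert
#include <GL/glew.h>
#include <cstdio>

static GLuint compile(GLenum type, const char* src) {
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, nullptr);
    glCompileShader(s);
    GLint ok = 0;
    glGetShaderiv(s, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(s, sizeof(log), nullptr, log);
        std::fprintf(stderr, "compile error: %s\n", log);
    }
    return s;
}

GLuint linkProgram(const char* vsSrc, const char* fsSrc) {
    GLuint p = glCreateProgram();
    glAttachShader(p, compile(GL_VERTEX_SHADER, vsSrc));
    glAttachShader(p, compile(GL_FRAGMENT_SHADER, fsSrc));
    glLinkProgram(p);
    GLint ok = 0;
    glGetProgramiv(p, GL_LINK_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetProgramInfoLog(p, sizeof(log), nullptr, log);
        std::fprintf(stderr, "link error: %s\n", log);
    }
    return p;
}
```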

… I believe the Cg gp4 profiles support the INSTANCEID semantic (see the Cg manual for a complete list of all available profiles and their semantics) …
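
For illustration, here is roughly what an instanced Cg vertex program might look like under a gp4 vertex profile (an untested sketch based on the semantic names in the manual):

```cpp
// Hypothetical Cg vertex program using INSTANCEID, held as a C++ string.
const char* cgVertexSrc = R"(
    void main(float4 position     : POSITION,
              uniform float4x4 mvp,
              uniform float4   offsets[64],    // per-instance data
              int instanceId     : INSTANCEID, // gp4 vertex profile input
              out float4 oPos    : POSITION)
    {
        // Offset each instance by its own entry in the uniform array.
        oPos = mul(mvp, position + offsets[instanceId]);
    }
)";
```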

Yes, the INSTANCEID semantic is supported, but the gp4 profiles are supported ONLY on NVIDIA cards, so forget about Cg :)

I have a strange feeling about relying on Cg at all. If you care about running your apps on many cards, I would choose GLSL for OpenGL. I had the same question half a year ago and decided to write my own OpenGL FX file format with techniques and fallbacks, which turned out to be not as difficult as expected. Now it runs on any card I have tried (except Intel, of course!), falling back to the last technique in the hierarchy if needed (see the sketch below).

P.S. I’m a big fan of FX files from the Direct3D 9 days.
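
The fallback selection itself is only a few lines; roughly this (stubbed helpers, invented names):

```cpp
// Sketch of the fallback idea: walk the technique list in priority order
// and take the first one whose requirements the driver satisfies; the
// last technique is the lowest common denominator.
#include <string>
#include <vector>

struct TechniqueDesc {
    std::string name;
    int glslVersionNeeded;               // e.g. 110, 120, 130...
    std::vector<std::string> extensionsNeeded;
};

// These would query GL_SHADING_LANGUAGE_VERSION and the extension string
// once at startup; left as stubs to keep the sketch short.
bool supportsGLSL(int version);
bool hasExtension(const std::string& name);

const TechniqueDesc* pick(const std::vector<TechniqueDesc>& techs) {
    for (const TechniqueDesc& t : techs) {
        bool ok = supportsGLSL(t.glslVersionNeeded);
        for (const std::string& ext : t.extensionsNeeded)
            ok = ok && hasExtension(ext);
        if (ok) return &t;
    }
    return techs.empty() ? nullptr : &techs.back();  // last = fallback
}
```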

Funny thing is that if you code up your own flavor you’ll probably end up with something very close to Cg.

I’ve been a proponent of the roll-your-own course but have recently become lazy beyond all reckoning. I’ve personally found that ultra-high-level material/effect descriptions, free of the mixed nuts and bolts of language specifics, are easily converted to the language flavor of choice and absolve the author of low-level hardware concerns. The idea is to describe what to draw rather than how to draw it. The cool thing about an abstraction like this is that it scales with technology: it looks better in the future, as a future renderer can reinterpret the description through the lens of the latest tech. It’s another way to distance/insulate yourself from the platform/shader quagmire, where content is concerned at any rate.
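
To make that concrete, a hypothetical flavor of such a description (plain data, all names made up):

```cpp
// "What, not how": plain data the renderer is free to reinterpret with
// whatever tech it has available.
#include <string>

struct MaterialDesc {
    std::string albedoMap;   // e.g. "rock_albedo.png"
    std::string normalMap;   // empty = renderer decides (flat normals)
    float       roughness;   // 0..1, interpretation owned by the renderer
    float       metalness;   // 0..1
    bool        castsShadows;
};

// Today's renderer might map this onto a fixed Blinn-Phong shader; a
// future one onto something fancier. The description itself never changes.
```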

Agreed. Rolling your own flavor with GLSL support on any card, whatever GLSL version is supported, was the perfect solution for ATI cards. Because I also had a basic Direct3D renderer with FX files, I was pulling my hair out trying to figure out how to support shaders (FX files) in both renderers with minimal effort. CgFX made too many glGet calls at that time to apply render states and the like; the performance was beyond horrible. In the end, the native feature set is best, since Cg is an NVIDIA brand.
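
For the render-state part, the usual trick is to shadow the state yourself instead of querying it back with glGet; something like this sketch:

```cpp
// Render-state shadowing: remember what you last set and never call
// glGet* in the frame loop; redundant state changes are skipped entirely.
#include <GL/glew.h>

struct StateCache {
    bool   blendOn   = false;
    GLenum srcFactor = GL_ONE;
    GLenum dstFactor = GL_ZERO;

    void setBlend(bool on, GLenum src, GLenum dst) {
        if (on != blendOn) {
            on ? glEnable(GL_BLEND) : glDisable(GL_BLEND);
            blendOn = on;
        }
        if (on && (src != srcFactor || dst != dstFactor)) {
            glBlendFunc(src, dst);
            srcFactor = src;
            dstFactor = dst;
        }
    }
};
```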