cg & display lists

I was wondering: if, while creating a display list, I call something like:

CGparameter param = cgGetNamedParameter(prog,name.c_str());

Will that be captured into the display list? I figure the Cg library is going to issue OpenGL commands under the hood, and those should be captured.

This relates to a problem I’m having. I read in a .obj file and build one display list per group. Within each group there can be multiple materials/colors from the .mtl file, so as I generate a triangle strip, for instance, the colors might change. Unfortunately, when I run the Cg fragment program it takes into account the first and second material, but if I have more it just runs that material over the rest of the triangles. I get the material info from glstate.material.diffuse, for instance; I assume the glstate was set by glMaterialfv when I created my list.

When I run the exact same display lists in plain OpenGL (without Cg), I get the proper colors from glMaterialfv.

I’m wondering if I should call cgGLSetMatrixParameterfr explicitly to set a variable in the Cg program rather than using glstate?


I’m using Cg 1.5. B is for beta. Sure hope it’s setting state right…

Hi nib,

The Cg GL runtime doesn’t interact particularly well with display lists, because of the way it touches OpenGL state. Basically, when you set a parameter, it queries the currently bound program, binds the one associated with your parameter, updates your parameter, then binds back the program that was bound before.

So if you compile that call into a display list, you’ll capture the parameter setting, but the list will also always bind back to whichever program happened to be bound when you created the list.

This is ugly. We’re expecting the new parameter-setting APIs being developed for the GL 3.0 effort to simplify things considerably, and to make them efficient and display-list friendly.

In the meantime, I’d avoid building display lists out of Cg-runtime-generated OpenGL calls.

Thanks -

Thanks for the info.

I’ll avoid Cg calls inside the display lists. It appears to work if I set the shader parameters first and then call the purely standard OpenGL display lists separately. The caveat was the glstate issue.

I was reading the Cg spec this weekend and found that glstate is not supported in fragment programs. It must have been a fluke that I was able to use glstate at all; maybe I was reading values left over in the registers. So now I pass the material info from the vertex program to the fragment program via COLOR0, COLOR1, and texcoords. Evidently, glstate in a vertex program is valid. Using this method, I appear to be able to change material values inside a display list and have them reach all the way down into my fragment program.
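A sketch of that pass-through, assuming an arbvp1-style vertex profile; this is illustrative, not the poster’s actual program, and the parameter names are made up:

```cg
// Hypothetical vertex program: read the material that glMaterialfv set
// (glstate is legal in vertex profiles) and forward it to the fragment
// program through the COLOR0/COLOR1 interpolators.
void main(float4 position       : POSITION,
          out float4 oPos       : POSITION,
          out float4 oDiffuse   : COLOR0,
          out float4 oSpecular  : COLOR1,
          uniform float4x4 modelViewProj)
{
    oPos      = mul(modelViewProj, position);
    oDiffuse  = glstate.material.diffuse;   // per-call material from the list
    oSpecular = glstate.material.specular;
}
```

The fragment program then declares `float4 diffuse : COLOR0` as an input instead of touching glstate at all.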

I’ve seen some Cg examples out there where people just encode the material info via glMultiTexCoord. I use glMultiTexCoord to pass in TBN matrix info as well. Things get tight, though, because the arbvp1 profile only has 8 texcoord registers.

nib, maybe the Cg GLSL profile support will help you get around the 8-texcoord limit on non-NVIDIA cards.

Is there any chance a “manual” mode could be supported in the Cg runtime, so that the query/bind/update/bind sequence is not done, just the “update” step? When the engine is already built in such a way that parameters are updated while the corresponding program is bound, those extra calls are pretty redundant.
Oh, and is there something like an official Cg forum? There used to be one, but it has been down for quite a long time.

I’ve maxed out my registers at 32 in arbfp1/arbvp1! This really limits my dynamic lighting. Also, any looping has to be as limited as possible and cannot have a variable count. Eeeck…

Yeah, I notice that cgc can cross-compile to GLSL, so I could move forward that way, e.g.:

cgc -profile glslv

But I’m not sure if Cg can compile the GLSL into assembly; I’d have to use another tool for that. Though I think Cg can read in an assembly program as of Cg 1.5(?). Or can one just load the GLSL right back into Cg, where it would compile it and do its thing? Apple’s OpenGL Shader Builder does not seem to like the cgc glslv output.

My program is simple. It has three modes. Mode 1: plain OpenGL with no extensions, using display lists. Its intention is to make everyone happy, because there are no extensions or fancy stuff.

Mode 2: I bind a single vertex and fragment program around what I did in mode 1. Actually it’s a stack: the first instance applies to everything, and if the user wants to customize they can push another shader onto the stack (which does a query/bind/update/bind) or turn it all off for a particular instance. That’s the exception, though. Since I only use one fragment/vertex program, there are hardly any bindings going on; I tend to just call cgGLSetMatrixParameter.

Mode 3: I add shadow mapping with a depth-mapped FBO and do the shadow calculation in the fragment program. This is pretty slow. Maybe Mac OS X 10.5 will help. :)

I suppose a problem with having one vertex and fragment program is that I have to branch for the different program options, and branching = bad. But I figure I can look at all the game options first, then load an appropriate program that has those options already evaluated, and get rid of the branching. E.g., the user selects shadows or no shadows…

So I take it this is probably the best place to ask Cg questions?
Cheers; I’ll search and, if nothing turns up, post my Cg question here in future.

But I’m not sure if Cg can compile the GLSL into assembly.
That’s not the point of compiling to glslang. Think of a glslang compile as a compile to assembly, only in a different language. It’s still a “final” language, in that you compile to something that you hand directly to the driver.

I wasn’t aware you were on a Mac; in that case the GLSL route indeed won’t help you. It was meant as a workaround for certain limits of the ARB assembly extensions, which aren’t updated anymore (by ATI, for example), so to get the most out of their latest hardware you have to use GLSL (hence I was pretty happy about 1.5 offering that possibility).

It would be really good if the shadertech forum were online again. I wonder why NVIDIA pulled the plug and has no official forum for their own software; instead one has to hijack others. ;)

Hmm, just messing around with this Cg/GLSL business.

So, I got rid of all the glstate from my programs; thus they should work with other, more advanced profiles. Anyhow, I compile them with “-profile glslv” turned on. For kicks, I took the output and put it into the Mac OpenGL Shader Builder. It says “OK”: it looks like a valid GLSL vertex and fragment program, and I get valid “ARBfragmentshader” assembly in the log.

I assume that if I set the profile in my C++ code to something like CG_PROFILE_GLSLV or CG_PROFILE_GLSLF, it’s going to build to GLSL on the fly? Anyhow, if I then use cgCreateProgramFromFile with the previously mentioned profiles, I get CG_NO_ERROR.

OK, but then cgGLLoadProgram will not load the program (invalid profile). OpenGL Profiler shows no program loaded. Huff…

Hi nib,

Have you looked at the GLSL examples in the latest toolkit? There are some caveats for using GLSL shaders (for example, you have to combine the shaders so that the Cg runtime can make a ProgramObject).

I really wish the GLSL model did not require creating a separate program object for every tuple of shaders. As the number of programmable stages increases, this becomes unwieldy.

If you continue to have problems, please feel free to email me at

Thanks -

I’ll have to look. I installed what I think is the latest version of Cg, though this is a Mac dual G5. The Mac installer says it’s 1.5.0b3-14-Tiger. This appears to be the latest installer; the site says 1.5 is released, but it appears to be a beta installer. Also, there is an examples.tgz link that does not work.

I’ll try to download the Windows installer. Haven’t run the old P4 for a month; it’ll need some Microsoft update patches. ;)

I looked in the /Developer/NVidia folder and did not see any GLSL examples. I do see other examples, like CgFX. The framework appears to be installed in /Library/Frameworks, which is fine. I was pretty careful about cleaning out the previous install.

I’m curious to see how the GLSL would run on my Radeon 9800 via Cg.