I’ve upgraded to Leopard. From what I’ve read, it’s using GLSL 1.20.

The problem is that some GLSL scripts I’ve written are getting bounced into software mode. The scripts appear to compile fine, but it’s the link and validation step that reports an error.

It boils down to this: I cannot use texture2D and textureCube in the same GLSL script.

On my Radeon 9800, I get an error like:

Validation Failed: Sampler error:
Samplers of different types use the same texture image unit.
- or -
A sampler’s texture unit is out of range (greater than max allowed or negative).

vec3 expand(vec3 v) {
    return (v - 0.5) * 2.0;
}

varying vec2 tex_coord;
varying vec3 lvec1;

uniform sampler2D texunit1;
uniform samplerCube cubemap;

void main() {

    //you can have one but not both:
    //you can have more than one texture2D,
    //but not a textureCube

    //grab the normal
    vec3 norm = expand( texture2D(texunit1, tex_coord).xyz );

    //normalize the light vector
    vec3 _l = expand( textureCube(cubemap, lvec1).xyz );

    //just return a constant color
    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}


Any suggestions? Thanks in advance.

AFAIK linking should work fine. Validation takes the current OpenGL state into account to check whether the shaders can work with that state, and the validation message gives a hint about what may be wrong.
I believe you must set the texture unit IDs for both sampler uniforms so that validation also succeeds.


Uniform values default to zero,
so it is a validation error to have

sampler2D texunit1 = 0; samplerCube cubemap = 0;

because each sampler can only sample from one type of texture, per the spec.

This is working correctly, you need to set the uniforms before validating.

Hmm, shoot. It’s an issue with the GLSL editor sample ( /Developer/Examples/OpenGL/Cocoa/GLSLEditorSample/ ). I don’t think it understands cube maps, so two sampler uniforms get assigned the same texture unit. It’s sort of fun to run the scripts with this tool, because the render window shows which renderer is in use, so it’s easy to tell when you get bumped into software mode.

I guess the real big change here on the Mac is going to GLSL 1.20? I’ll just have to read up on the docs. The problem is my frame rate dropped by a factor of about 20. If I recall, Leopard has new drivers. On the other hand, if I turn my GLSL scripts off, things run faster than they did in 10.4 (or at least it appears that way).

Hmm, I’ll probably just put in some pass-through placeholder scripts first and see if the frame rate becomes acceptable. Then I’ll sub in my GLSL scripts one by one; if things slow down, I’ll start commenting code out until they speed up. My hunch is I’m either using too many variables or too many texture units. No, it can’t be texture units, because Driver Monitor says MAX_TEXTURE_IMAGE_UNITS_ARB is 16. I figure if even the pass-through scripts are slow, that would indicate a problem with binding – I did not go the uber-script route, and instead have many small scripts.

Hmm, I’ve homed in on the real problem.

I just put in pass-through scripts and slowly substituted my own scripts into my program. Things run fast with the pass-through scripts, so it’s not binding or setting uniform variables.

The problem starts right when shadow2DProj appears anywhere.

if( doshadow ) {
    _coef = shadow2DProj(shadowmap, shadow_coord).r;
    //sum this even more to smooth the shadow…how about 6 times?
    //one is enough to make it slooow…
} else {
    _coef = 1.0;
}

If I call this at all, performance goes down the drain on my G5/9800 card on Leopard. I call it more than once to smooth out my shadows, but calling it even once is apparently a big poop. If I take it out, things run fast – faster than before on 10.4, it seems, and that includes drawing the shadow map into the FBO (shadowmap is rendered into an FBO). I think the driver is ignoring the if and doing it anyway – because even if I set the uniform flag to no shadows, it’s slow regardless.

I figure the right thing to do is to drop the flag and load the script either with or without shadows. But I would still have the slowdown in the shadow case.

Poop. I know the card can do it. Well, it used to do it. :slight_smile:

As usual, if you’ve found a bug, please submit it to Apple at . It’s a few minutes’ work for you, but it could save someone else weeks of headache :wink:

I sent the mother ship a bug report.

For now, I just put a SHADOWS macro into my scripts. When I read in my GLSL scripts, I compile two different versions: one version is unaltered, with SHADOWS not defined; for the other version, I prepend every GLSL script with #define SHADOWS.

So, when things are drawn around the truck, the scene’s focal point, I use the shadows version of the GLSL scripts. If something is far away – where there are no shadows because it’s outside my shadow map’s bounds – I use the scripts where SHADOWS is not defined. It’s a lot faster now on 10.4 in all cases.

If the user sets the game options to no shadows, the program just uses the GLSL scripts where SHADOWS is undefined. I suppose this is what’s going to have to happen in 10.5.
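Inside the fragment shader, the compile-time guard described above would look something like this (a sketch, using the variable names from the earlier snippet):

```glsl
#ifdef SHADOWS
    //only present in the "#define SHADOWS" variant
    float _coef = shadow2DProj(shadowmap, shadow_coord).r;
#else
    //the no-shadow variant never references shadow2DProj at all
    float _coef = 1.0;
#endif
```

Unlike a runtime uniform flag, the preprocessor removes the shadow2DProj call entirely from the no-shadow variant, so a driver that executes both sides of a branch never sees it.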

So, this solves the “driver is going to execute all logic branches” problem, even when it boots GLSL into software mode. You can’t control what the driver does or doesn’t do. My spidey sense says it’s probably calling some extension under the hood that might not be supported.

The user can select the tiered options to get what is best for the particular card.

Ah ha! Using texture unit 1 with shadow2DProj works on the NVIDIA GeForce 7300 GT OpenGL Engine.

Seems to be a key NVIDIA quirk. I guess the lesson is: when in doubt, punt for the lowest-numbered texture unit. Huff…

The Mac OS X 10.5.2 update and the accompanying graphics update apparently fixed the problem on my ATI 9800 card, so calling shadow2DProj is fast again there. All looks well on both my ATI and NVIDIA cards.

Thanks to the person behind the green curtain. :slight_smile:
