Switching GLSL vertex shaders -- REALLY slow?

I have a case with four separate GLSL vertex shaders, where a quarter of the scene is drawn with each shader. This scene, running at a fixed 60 Hz, takes 100% of the CPU! Meanwhile, if I set the vertex shader once and draw the same scene without switching shaders, it takes only 5% of the CPU.

I can understand that the first bind of a specific shader would cause compilation and validation, but this CPU usage occurs every single frame, not just the first time the shader is bound.

What can I do to prevent this problem? Spending all that CPU just on switching shaders is not an option for us; we’d have to go back to fixed-function processing instead.

You mean this:

for (int i = 0; i < 4; i++)
{
    glUseProgramObject(program[i]);   // switch shaders per scene part
    Render(scenepart[i]);
}

versus this:

glUseProgramObject(program[n]);   // bind once
for (int i = 0; i < 4; i++)
{
    Render(scenepart[i]);
}

Did you test all four shaders for n above?
Are they equally fast?
Is one of them falling back to software rendering (well, at 60 Hz that would be surprising)?

What hardware and drivers?

Yes, that’s what I mean. Yes, I tested all four shaders; no difference.

The surprising thing is that this happens on both NVIDIA and ATI hardware: anything from a GF4200Ti, through a Radeon 9600, to a GF6800, all using modern drivers.

The fact that both hardware lines see the same problem leads me to believe there’s some systemic reason for this CPU usage, but I just can’t figure out what it would be, or how to work around it.

I don’t know what’s causing the problem you’re having, but I have more than a dozen shaders (fragment/vertex pairs) and switch between them many times during one frame (I never got around to sorting by shader), yet my CPU usage is still below 15%…
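(By “sorting by shader” I mean grouping draw calls by program so each program is bound at most once per frame. A rough sketch, with DrawItem and Render() as hypothetical names, and assuming the ARB entry points are already loaded:)

#include <algorithm>
#include <vector>
#include <GL/gl.h>
#include <GL/glext.h>

struct DrawItem
{
    GLhandleARB program;   // shader program for this piece of the scene
    int         mesh;      // whatever identifies the geometry
};

extern void Render(int mesh);   // hypothetical draw helper

void RenderSorted(std::vector<DrawItem> &items)
{
    // Order by program so draw calls using the same program are adjacent.
    std::sort(items.begin(), items.end(),
              [](const DrawItem &a, const DrawItem &b)
              { return a.program < b.program; });

    GLhandleARB current = 0;
    for (const DrawItem &item : items)
    {
        if (item.program != current)   // bind only when the program changes
        {
            glUseProgramObjectARB(item.program);
            current = item.program;
        }
        Render(item.mesh);
    }
}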

Thanks; that’s a good data point.

Are you absolutely sure you are only calling glUseProgramObject 4 times per frame? Shader swapping is a performance killer…
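(One cheap way to check, as a sketch; the wrapper and counter are hypothetical, not from this thread:)

#include <cstdio>
#include <GL/gl.h>
#include <GL/glext.h>

static int g_bindsThisFrame = 0;

// Route every shader bind through this wrapper so you can count
// the actual number of binds per frame.
void UseProgramCounted(GLhandleARB program)
{
    ++g_bindsThisFrame;
    glUseProgramObjectARB(program);
}

// Call once at the end of each frame.
void ReportBinds()
{
    std::printf("shader binds this frame: %d\n", g_bindsThisFrame);
    g_bindsThisFrame = 0;
}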

I have certainly done many shader changes per frame on a variety of hardware (both ATI and NVIDIA) and never encountered the scenario you describe. However, I have never used a lone vertex or fragment shader. You could try attaching a trivial fragment shader (one that simply writes (1, 1, 1, 1) to gl_FragColor) alongside each of the vertex shaders, simply for testing, and see whether the problem persists.
Then again, you might have tried that already; I’m just guessing.
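A minimal sketch of that test, assuming the program[] handles from the pseudocode above and that the ARB_shader_objects entry points are loaded:

// Trivial fragment shader that writes constant white, to test whether
// lone vertex shaders are the problem.
const char *fs_src =
    "void main()\n"
    "{\n"
    "    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);\n"
    "}\n";

GLhandleARB fs = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
glShaderSourceARB(fs, 1, &fs_src, NULL);
glCompileShaderARB(fs);

for (int i = 0; i < 4; i++)
{
    glAttachObjectARB(program[i], fs);   // one shared test FS for all four programs
    glLinkProgramARB(program[i]);        // relink after attaching the new stage
}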

The surprising thing is that this happens on both NVIDIA and ATI hardware: anything from a GF4200Ti, through a Radeon 9600, to a GF6800, all using modern drivers.

The GF4200Ti is not a GeForce 4, I take it, since those don’t support GLSL?

Sorry, I’ve no answers, but I can also testify to using great numbers of shaders, averaging >30 changes a frame (actually, I’m forgetting lighting; there are probably more than 100 changes a frame), and I’m getting no great CPU hit.

One thing you can try (it’s stupid, but I get an occasional error if I don’t do the following): after you’ve set the program, reset it with this:

GLint active_programID;
glGetIntegerv(GL_CURRENT_PROGRAM, &active_programID);
glUseProgram(GLuint(active_programID));

Also, one thing I just noticed: you mention just vertex shaders (I always think of them as unified, i.e. a shader is VS+FS combined).
I’m lazy, so each FS gets its own VS (probably a lot of duplication among the VSs); perhaps it has something to do with this.
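For what it’s worth, a single compiled vertex shader object can be attached to several program objects, so that duplication is avoidable. A sketch along the lines of the FS test above; vs_src, fragShader[] and numPrograms are hypothetical names:

GLhandleARB vs = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
glShaderSourceARB(vs, 1, &vs_src, NULL);
glCompileShaderARB(vs);

for (int i = 0; i < numPrograms; i++)
{
    glAttachObjectARB(program[i], vs);            // the same VS object everywhere
    glAttachObjectARB(program[i], fragShader[i]); // only the FS differs per program
    glLinkProgramARB(program[i]);
}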
