Detecting a software fallback

Hello,

I'm currently involved in developing a game that uses a lot of GLSL shaders.
Unfortunately, GLSL hardware implementations seem to vary from vendor to vendor, so I'm trying to figure out how to detect when a software fallback has happened.
TyphoonLabs' Shader Designer is able to detect this, as it outputs debug strings like:

 Link successful. The GLSL vertex shader will run in software due to the GLSL fragment shader running in software. The GLSL fragment shader will run in software - available number of ALU instructions exceeded.

I've tried getting info from the info log using the following code, but even if the renderer falls back to software mode, the log length is '1' (i.e. no error):

static char    *PrintInfoLog(GLhandleARB object)
{
	static char     msg[4096];
	GLint           maxLength = 0;

	// Ask the driver how many bytes the info log occupies (including the NUL).
	glGetObjectParameterivARB(object, GL_OBJECT_INFO_LOG_LENGTH_ARB, &maxLength);

	if(maxLength >= (GLint)sizeof(msg))
	{
		Error("PrintInfoLog: max length >= sizeof(msg)");
		return NULL;
	}

	// Copy the log into the static buffer and return it.
	glGetInfoLogARB(object, maxLength, &maxLength, msg);

	return msg;
}

How do you guys detect a software fallback?

In theory your example is correct and the log should tell you whether your shader cannot be run in hardware.

In practice there can be situations where the shader can be run in hardware (so the log reports success) but there is a driver problem that causes it to run in software due to an incompatible state setting.

For example (on my ATI card) having line smoothing enabled whilst running any shader would cause it to drop into software rendering without any error report.

Edit:

I do my linking slightly differently, in that I don't request the log size; maybe try it this way:

	int  success;
	char log[8192];

	glLinkProgramARB(shader.program);
	glGetObjectParameterivARB(shader.program, GL_OBJECT_LINK_STATUS_ARB, &success);
	if(!success)
	{
		glGetInfoLogARB(shader.program, sizeof(log), NULL, log);
		printf("%s\n", log);
	}

Originally posted by killerseven:
[b]In practice there can be situations where the shader can be run in hardware (so the log reports success) but there is a driver problem that causes it to run in software due to an incompatible state setting.

For example (on my ATI card) having line smoothing enabled whilst running any shader would cause it to drop into software rendering without any error report.[/b]
I know about the ATI problems with point/line smoothing, but that's not a problem since I'm not dealing with these options.
The thing is, I do test my shaders in Shader Designer and get the output "cannot be run in HW", but the same shader in-game just falls back to SW without any output. I would really like to know how Shader Designer handles the whole affair. But thanks for your hint, I'll try it out now.

Edit: I've just noticed your code checks the link/validate status, whereas I've been trying to get this information from the info log. The check on link/validate itself is the same for me:

static void LinkProgram(GLhandleARB program)
{
	GLint           linked;

	glLinkProgramARB(program);

	glGetObjectParameterivARB(program, GL_OBJECT_LINK_STATUS_ARB, &linked);
	if(!linked)
	{
		Error("%s
shaders failed to link", PrintInfoLog(program));
	}
}

static void ValidateProgram(GLhandleARB program)
{
	GLint           validated;

	glValidateProgramARB(program);

	glGetObjectParameterivARB(program, GL_OBJECT_VALIDATE_STATUS_ARB, &validated);
	if(!validated)
	{
		Error( "%s
shaders failed to validate", PrintInfoLog(program));
	}
}

Here is the logic behind the shader loading code (a rough sketch in code follows the list):
-CreateProgramObject
-load vertex shader
-load fragment shader
-bind attribs
-link program
-validate program
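
A minimal sketch of that sequence under the ARB API, reusing the LinkProgram/ValidateProgram helpers above. LoadShader is a hypothetical helper (not the actual engine code) that compiles one stage and checks GL_OBJECT_COMPILE_STATUS_ARB:

	GLhandleARB program = glCreateProgramObjectARB();

	// LoadShader (hypothetical) compiles one stage and traps compile errors.
	GLhandleARB vs = LoadShader("shader.vert", GL_VERTEX_SHADER_ARB);
	GLhandleARB fs = LoadShader("shader.frag", GL_FRAGMENT_SHADER_ARB);
	glAttachObjectARB(program, vs);
	glAttachObjectARB(program, fs);

	// Attribute bindings must happen before linking to take effect.
	glBindAttribLocationARB(program, 0, "position");

	LinkProgram(program);
	ValidateProgram(program);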

How do you guys detect a software fallback?
Just render something and measure frame time. Obviously, if you get <10 FPS, it is too slow.
You should measure this at startup, not during gameplay, because some resource-hog program could cause a sudden drop in FPS in the middle of a session.

That ATI issue is not something to worry about for games.
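
If you go the benchmarking route, a rough sketch (the frame count and FPS threshold are arbitrary assumptions, and DrawTestScene is a hypothetical helper that renders with the shader bound):

	#include <time.h>

	// Returns 1 if the bound shader renders faster than minFps, 0 otherwise.
	static int ShaderFastEnough(double minFps)
	{
		const int frames = 20;
		clock_t   start;
		double    seconds;
		int       i;

		DrawTestScene();   /* warm-up frame */
		glFinish();        /* make sure the GPU is idle before timing */

		start = clock();
		for(i = 0; i < frames; i++)
			DrawTestScene();
		glFinish();        /* wait for all queued frames to finish */

		seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
		return seconds > 0.0 && (frames / seconds) >= minFps;
	}

Note that clock() measures CPU time on some platforms rather than wall time; a software fallback burns CPU, so the test should still trip in the software case.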

Originally posted by V-man:
[b]Just render something and measure frame time. Obviously, if you get <10 FPS, it is too slow.[/b]
Hmm, that wouldn't be a proper solution, since the main menu is drawn with a proper FPS but is totally corrupted (drawing quads without texture, for example).

There must be a way to detect a SW fallback, since ATI RenderMonkey and Shader Designer are capable of doing it.

This seems like a hot topic where nobody knows the proper solution.

Hmm, that wouldn't be a proper solution, since the main menu is drawn with a proper FPS but is totally corrupted (drawing quads without texture, for example).
If it’s corrupted, then you have jam on your GPU.

there must be a way to detect a SW fallback since ati rendermonkey and shaderdesigner are capable of doing it.
I just tried with Rendermonkey right now.
POLYGON_SMOOTH doesn't affect performance or anything.

POINT_SMOOTH causes some corruption and flicker. Perhaps this is what you saw as well.
Disabling it causes texturing to be lost, and some other corruption pattern appears.

LINE_SMOOTH throws it into software rendering. FPS = 0.0.
RM is not able to detect that it runs in software mode.

I’m guessing RM only searches for the word “software” in the info log.

Originally posted by V-man:
[b]I'm guessing RM only searches for the word "software" in the info log.[/b]
Well, maybe I didn't explain myself properly here. I'm not using LINE_, POLYGON_ or POINT_SMOOTH.
For example, if I do too many texture lookups in a shader, then GL Shader Designer falls back to software and outputs the string I posted earlier (maybe because it strstr'd the info log buffer for "software").
The weird thing is, the same shader in "nsco.gold", with too many texture lookups for my X700 GPU, won't produce any info log; it'll just fall back to software (i.e. the ugly rendering).
If there are only 8 texture lookups in the shader, the menu is rendered correctly and there is no software fallback - so I doubt there is "jam" on the GPU :)
But my info log code works, because a typo or broken code will be detected by the driver and generate an info log.

If I understand you correctly, it's not an issue with detecting software rendering. Your info log is bogus.

You likely have errors in your code. Wild guesses:
-you aren't trapping compilation errors for the vertex and fragment shaders
-you haven't linked
-there is something bizarre with the string you give to compile. Try GLIntercept.
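
For the first guess, trapping compile errors per shader object would look something like this (a sketch using the same ARB API and the Error/PrintInfoLog helpers from earlier in the thread):

	static void CompileShader(GLhandleARB shader, const char *source)
	{
		GLint compiled;

		glShaderSourceARB(shader, 1, &source, NULL);
		glCompileShaderARB(shader);

		// Trap compile errors per stage, before ever linking.
		glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &compiled);
		if(!compiled)
		{
			Error("%s\nshader failed to compile", PrintInfoLog(shader));
		}
	}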

Ehh… you only write out the log if linking fails, and if it's running in software it obviously didn't fail.
To test for software mode you should always scan the linking info log for "run in software". Beware that the info log format is pretty informal and this would only work on (current) ATI drivers - there is no official way to do it.
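
A minimal sketch of that, reusing PrintInfoLog from above - fetch the log even when linking succeeds and search it for the driver's wording (informal and ATI-specific, as noted; the Warning call is a hypothetical stand-in):

	GLint       linked;
	const char *log;

	glLinkProgramARB(program);
	glGetObjectParameterivARB(program, GL_OBJECT_LINK_STATUS_ARB, &linked);

	// Read the info log even on success: a successful link can still
	// report a software fallback in the log text. strstr needs <string.h>.
	log = PrintInfoLog(program);
	if(linked && log != NULL && strstr(log, "software"))
	{
		Warning("shader will run in software:\n%s", log);  /* hypothetical */
	}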

Originally posted by PsychoLns:
To test for software mode you should always scan the linking info log for "run in software".

Yes! Thank you, that was exactly what I was doing wrong.
What do the NVIDIA drivers output when they fall back to software?

They don’t. If your program linked successfully it will be run by the hardware.

Not true, it runs in software with NVIDIA - at least, running in software is the only explanation for 1 spf on NV30 and 60 Hz on NV40.

And they also output the same info log as ATI cards; at least, the software fallback detection works on a GeForce 6600 according to a tester of nsco.gold.

>>Not true, it runs in software with NVIDIA - at least, running in software is the only explanation for 1 spf on NV30 and 60 Hz on NV40.<<

You make false assumptions. There can be other reasons for a SW fallback, for example using OpenGL 2.0's non-power-of-two GL_TEXTURE_2D targets on NV3x.
This is not related to a successful GLSL shader link step and cannot be reported in the info log.
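
A common workaround for that particular case is to gate NPOT usage on the extension string rather than the GL version - a sketch, assuming NV3x advertises GL 2.0 (which mandates NPOT) while only exporting the ARB extension where NPOT runs natively:

	// strstr needs <string.h>. Prefer the explicit extension over the
	// GL 2.0 core feature when deciding whether NPOT textures are safe.
	const char *ext = (const char *)glGetString(GL_EXTENSIONS);
	int hasNativeNPOT = ext && strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;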

>>and they also output the same infolog as ati cards, atleast the software fallback detections works on a geforce 6600 according to a tester of nsco.gold. <<

Ehm, solid hearsay facts? ;) I'd be curious to see a shader which proves that.

Relic, the GLSL spec explicitly does not allow a program to fail to link because of program length. If your program exceeds the implementation’s preferred limits, that implementation must find some other way to make it work (virtualization, software fallback, etc).

NV cards do have ALU instruction limits, and you can write a program that exceeds them.

Unfortunately, the general resolution in this thread is correct: there is no portable way to determine that a software fallback has taken place. Others have already suggested that ATI's info log will contain the word "software", which probably isn't reliable, but is better than nothing.

Interestingly enough, Apple does have a supported solution for this, via the CGL API:

CGLGetParameter (ctx, kCGLCPGPUVertexProcessing, &GPUvert);
CGLGetParameter (ctx, kCGLCPGPUFragmentProcessing, &GPUfrag);

Your best bet would probably be to push for an OpenGL extension, which adds something like ARB_vp’s PROGRAM_UNDER_NATIVE_LIMITS query.
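
For completeness, a fleshed-out version of that CGL query (macOS only, assuming a current context is bound):

	#include <OpenGL/OpenGL.h>
	#include <stdio.h>

	static void PrintGPUProcessing(void)
	{
		CGLContextObj ctx = CGLGetCurrentContext();
		GLint GPUvert = 0, GPUfrag = 0;

		// Nonzero means the stage is processed on the GPU.
		CGLGetParameter(ctx, kCGLCPGPUVertexProcessing, &GPUvert);
		CGLGetParameter(ctx, kCGLCPGPUFragmentProcessing, &GPUfrag);

		printf("vertex: %s, fragment: %s\n",
		       GPUvert ? "hardware" : "software",
		       GPUfrag ? "hardware" : "software");
	}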

>>the GLSL spec explicitly does not allow a program to fail to link because of program length<<

Which paragraph has this explicit statement in the GLSL spec?

Originally posted by Relic:
>>the GLSL spec explicitly does not allow a program to fail to link because of program length<<
Which paragraph has this explicit statement in the GLSL spec?

The closest thing to that is the paragraph cited below, although it uses "should" instead of "shall".

Add Subsection 2.14.5 Resource Limits
A shader should not fail to compile and a program object to link due to lack of instruction space or lack of temporary variables. Implementations should ensure that all valid shaders and program objects could be successfully compiled, linked and executed.

Corresponding issue discussion:

  1. Are the limits on all resources an executable uses queriable and known to the application?

DISCUSSION: Various proposals have been discussed. One very important consideration is to end up with a specification that provides application portability (e.g., ISVs do not need to support multiple rendering back ends in order to run on all the different flavors of hardware). ISVs definitely would prefer the specification to say that the OpenGL Shading Language implementation is responsible for ensuring that all valid shaders must run.

RESOLUTION: Resources that are easy to count (number of uniforms available to a vertex shader, number of uniforms available to a fragment shader, number of vertex attributes, number of varyings, number of texture units) will have queriable limits. The application is responsible for working within these externally visible limits. The OpenGL Shading Language implementation is responsible for virtualizing resources that are not easy to count (for example, the number of machine instructions in the final executable, number of temporary registers used in the final executable, etc.). The expectation is that for any practical application an executable (generated by the link stage) will run.
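
The countable limits mentioned in that resolution are the ones an application can check up front; a sketch of querying them (enums from ARB_vertex_shader / ARB_fragment_shader):

	GLint maxVertUniforms, maxFragUniforms, maxAttribs, maxVaryings, maxTexUnits;

	glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB, &maxVertUniforms);
	glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS_ARB, &maxFragUniforms);
	glGetIntegerv(GL_MAX_VERTEX_ATTRIBS_ARB, &maxAttribs);
	glGetIntegerv(GL_MAX_VARYING_FLOATS_ARB, &maxVaryings);
	glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS_ARB, &maxTexUnits);

Staying inside these tells you nothing about instruction counts, though - those are exactly the resources the spec expects the implementation to virtualize.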

Frogblast
Your best bet would probably be to push for an OpenGL extension, which adds something like ARB_vp’s PROGRAM_UNDER_NATIVE_LIMITS query.
That's not the best option, because even if you are under native limits, some GL state might cause software rendering, as happens on ATI.

The best would be a validator function that, when called, tells you whether the program runs in software or hardware. It would be nice to know why it runs in software, too.

The current validator function just checks if it can run or not.

Originally posted by Relic:
You make false assumptions. There can be other reasons for a SW fallback, for example using OpenGL 2.0’s non-power-of-two GL_TEXTURE_2D targets on NV3x.
Your arrogance is impressive… to a 4-year-old. Relic, you ain't, and never will be, Korval.
The only difference in my application between NV3x running at 60 Hz and NV3x running at 1 spf is an increased shader length. Incidentally, I never use non-power-of-two textures.
