Improved error reporting

Hi all,

I discovered a neat little trick which I want to share.

Most of you probably load shaders from a file into a string, and pass it to glShaderSource unchanged.

This works great, but unfortunately the file name is not included in the error messages:


0(7) : error C1008: undefined variable "gl_position"

When working on a program with multiple shaders, this can be a pain.

The trick is to insert a #line preprocessor directive, such as the one below, while loading the shader.


#line 1 "test.vert"

This improves reporting of errors:


test.vert(7) : error C1008: undefined variable "gl_position"

A routine that does just that is provided below. I am not sure it will work with every GLSL compiler, but it does with the one I am using.

Enjoy!


#include <cassert>
#include <cstdio>
// ...plus an OpenGL header that declares glShaderSource (e.g. via GLEW).

void shader_source_from_file(GLuint shader, const char* filename)
{
  // Read the whole file into a null-terminated buffer.
  FILE* fp = std::fopen(filename, "rb");
  assert(fp != NULL);
  std::fseek(fp, 0, SEEK_END);
  std::size_t shader_source_size = static_cast<std::size_t>(std::ftell(fp));
  std::rewind(fp);
  GLchar* shader_source = new GLchar[shader_source_size + 1];
  shader_source[shader_source_size] = 0;
  std::fread(shader_source, 1, shader_source_size, fp);
  std::fclose(fp);

  // Prepend the directive as a separate source string. It must end in a
  // newline: glShaderSource concatenates the strings without separators.
  char line_directive[80];
  std::snprintf(line_directive, sizeof(line_directive),
                "#line 1 \"%s\"\n", filename);
  const GLchar* string[2] = {line_directive, shader_source};
  glShaderSource(shader, 2, string, NULL);
  delete [] shader_source;
}
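
You would then use it roughly like this (a minimal sketch; the fixed-size log buffer is just for brevity):

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
shader_source_from_file(vs, "test.vert");
glCompileShader(vs);

GLint status = GL_FALSE;
glGetShaderiv(vs, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE)
{
  // The log now refers to "test.vert" instead of a bare source index.
  GLchar log[4096];
  glGetShaderInfoLog(vs, sizeof(log), NULL, log);
  std::fprintf(stderr, "%s", log);
}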

Nice trick! Thanks for sharing.

I am not sure it will work with every GLSL compiler, but it does with the one I am using.

Which compiler is that?

GL_VENDOR: NVIDIA Corporation
GL_RENDERER: Quadro FX 5800/PCI/SSE2
GL_VERSION: 3.2.0 NVIDIA 190.53
GL_SHADING_LANGUAGE_VERSION: 1.50 NVIDIA via Cg compiler

According to the GLSL 1.5 spec, “#line” accepts two integers, not a string as in C/C++. So this won’t work on AMD GPUs, unfortunately.

I know. This should not work on any conforming GLSL compiler.

Can someone with another GLSL compiler verify?

In order to be more “portable”, you could check the GL_SHADING_LANGUAGE_VERSION string before inserting the preprocessor directive.
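
Something along these lines, for instance (an untested sketch; keying on the “NVIDIA” substring that shows up in the version string quoted above, and the helper name is mine):

#include <cstring>

// Heuristic: only use the string form of #line when the GLSL compiler
// advertises itself as NVIDIA's (e.g. "1.50 NVIDIA via Cg compiler").
bool compiler_accepts_line_filename()
{
  const char* slv = reinterpret_cast<const char*>(
      glGetString(GL_SHADING_LANGUAGE_VERSION));
  return slv != NULL && std::strstr(slv, "NVIDIA") != NULL;
}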

There’s no specified format for log output to begin with + there’s no string type in the spec = hosed, doubly so. Even if there were a specified format, you’d be reduced to parsing the log output for the source string index, itself a sheet of black frost for those already treading lightly.

Which then begs the question: what’s the practical use of it in the context of general, platform independent debugging? That NV has taken to offering its own “brand” of GL here and there to give their customers the best possible experience should come as no surprise. This is a flexibility that makes GL lucrative to those who would see its development into the future, IMHO. Expecting the vendors to limit their implementation to the bar set by the spec in each and every case would not only be unrealistic but probably hurtful to GL in the end. Besides, I see nothing in the spec expressly forbidding the inclusion of such a feature.

“O Lord, bless this Thy hand grenade that with it Thou mayest blow Thine enemy to tiny bits, in Thy mercy…"
– Brother Maynard

Expecting the vendors to limit their implementation to the bar set by the spec in each and every case would not only be unrealistic but probably hurtful to GL in the end.

The word “Open” in OpenGL does not mean Open Source. It means Open Specification. That is, anyone can write an implementation; the specification is made public. This allows OpenGL to be a cross-platform standard.

This kind of expansion of functionality, especially since it doesn’t conform to any specification (extension or otherwise), is unacceptable. It works against cross-platform development. And, as you point out, encourages others to implement off-spec behavior.

Besides, I see nothing in the spec expressly forbidding the inclusion of such a feature.

It’s right here:

> #line must have, after macro substitution, one of the following forms:
>
> #line line
> #line line source-string-number
>
> where line and source-string-number are **constant integral expressions**.

Emphasis added.

In a specification, everything is forbidden unless it is specifically allowed. There is nothing here that allows “source-string-number” to be a string. And therefore it is forbidden.

>> In a specification, everything is forbidden unless it is specifically allowed.

How do you figure?

If the spec is satisfied - if a source index is accepted - then all is right with the world and we can go happily about our business. Going above and beyond the minimum requirement doesn’t seem to violate any laws on the books that I can find; excepting of course where a departure from the spec would introduce an unspecified error or an omission of one specified, but I don’t see that here.

Ah, so it’s a bug. I am assuming that one or both of you will file a bug on the NVidia drivers to get this fixed. It’s probably just something that slipped through from the Cg compiler their GLSL compiler is based on, which existed before GLSL was even thought of (not some insidious plot to undermine the integrity of GL).

How do you figure?

Because that’s how specifications work. If it says that value X is a Y, then value X cannot also be a Z.

If there is to be behavior outside of the specification, then it must be an extension. Indeed, that’s what extensions are for: to ensure that “extracurricular” behaviors are well and publicly specified. Thus keeping OpenGL open.

>> Because that’s how specifications work.

Where are the specifics on specifications… specified?

Hi all,

I did not want to provoke this discussion.

=> I know this is not portable. Nevertheless, it saved me a lot of time, so I wanted to share it. Feel free not to use it at all if you’d rather look at error messages with “(0)” as the “file”, or if you’d rather parse the output of the GLSL compiler.

=> This is clearly an oversight in the GLSL spec, which should have:

  • supported the #line directive as in the C preprocessor,
  • changed the wording to something like “where source-string-number is a constant integer expression … (increment stuff) … or a string” (a portable workaround using the existing integral form is sketched below); or
  • standardized error messages so that they can be parsed.
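
In the meantime, a portable middle ground is to use the integral form the spec does allow and keep your own table mapping source string numbers to file names. A sketch (the helper and table names are mine; the log format is still unspecified, so how the number is printed may differ per vendor):

#include <cstdio>
#include <string>
#include <vector>

// File names indexed by the source-string-number passed to #line; a
// conforming compiler carries this number through to its diagnostics.
static std::vector<std::string> g_shader_filenames;

void shader_source_tagged(GLuint shader, const GLchar* source,
                          const char* filename)
{
  int index = static_cast<int>(g_shader_filenames.size());
  g_shader_filenames.push_back(filename);

  // Integral form, as required by the GLSL spec. The trailing newline
  // matters: glShaderSource concatenates strings without separators.
  char line_directive[32];
  std::snprintf(line_directive, sizeof(line_directive),
                "#line 1 %d\n", index);

  const GLchar* strings[2] = {line_directive, source};
  glShaderSource(shader, 2, strings, NULL);
}

When a compiler that honors #line then reports, say, “3:7”, looking up g_shader_filenames[3] gives you the file name back.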

I’m a big fan of standards-compliant code, but if all code were standards-compliant, you probably would not have any of the programs you are currently using. Non-compliant code is not a problem as long as you are aware of it.

Yes, #line doesn’t work on nVidia. Seems related to this:

Feed glShaderSource() three source strings (each terminated by a line break):
0: #version 110
1: #define TEST
2: void main(void) TRASH { gl_Position = gl_Vertex; }
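
In code, the test looks roughly like this (error checking omitted; assumes a GL context and loader are already set up):

#include <cstdio>

void three_string_test()
{
  const GLchar* strings[3] =
  {
    "#version 110\n",
    "#define TEST\n",
    "void main(void) TRASH { gl_Position = gl_Vertex; }\n"
  };

  GLuint vs = glCreateShader(GL_VERTEX_SHADER);
  glShaderSource(vs, 3, strings, NULL);
  glCompileShader(vs);

  GLchar log[4096];
  glGetShaderInfoLog(vs, sizeof(log), NULL, log);
  std::printf("%s\n", log);
}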

This is the result on ATI FireGL V3600, Driver 8.683:

> Vertex shader failed to compile with the following errors:
> ERROR: 2:1: error(#132) Syntax error: 'TRASH' parse error
> ERROR: error(#273) 1 compilation errors. No code generated

“2:1” obviously means line 1 in source string 2. Well done!

Here is nVidia GTX285, Driver 196.21:

0(3) : error C0000: syntax error, unexpected identifier, expecting ',' or ';' at token "TRASH"
0(3) : error C0501: type name expected at token "TRASH"
0(3) : error C1014: "TRASH" is not a function
(2) : fatal error C9999: *** exception during compilation ***

Questions raised: “0(3)”?? Line 3 in source string 0, I guess. Seems they simply concatenate all three strings and then compile it as a whole. Not good. And what does that “(2)” mean?

Or am I just stupid and not able to read it right?

CatDog

>> Seems they simply concatenate all three strings and then compile it as a whole.

Yep. Well, according to an old NV doc, full compilation is deferred to program link time. Don’t know if that’s still true but I’ll assume it is until I hear otherwise.

Deferred compilation explains line numbering confusion?

CatDog
