nVidia 3.x + gl_ClipDistance = ungoodness.

Has anyone had experience with nVidia’s OpenGL 3.x implementation and gl_ClipDistance[]? I ask because when I use it in my vertex shaders, both the vertex and fragment shaders compile with no errors, but I get these link errors:

error: unknown builtin varying parameter (named gl_ClipDistance[0]) encountered
error: unknown builtin varying parameter (named gl_ClipDistance[1]) encountered
error: unknown builtin varying parameter (named gl_ClipDistance[2]) encountered
error: unknown builtin varying parameter (named gl_ClipDistance[3]) encountered
error: unknown builtin varying parameter (named gl_ClipDistance[4]) encountered
error: unknown builtin varying parameter (named gl_ClipDistance[5]) encountered

This happens for shaders with or without a geometry shader, under a 3.x GL context created via the new context-creation functions. It happens with or without the forward-compatible context flag set, and with the drivers just released by nVidia (under Linux). Hardware: GeForce 8700M.

I can post the shader code (beware: it is quasi-machine-generated and long), so if anyone knows what can make gl_ClipDistance fail or work, advice is appreciated.

gl_ClipDistance[] was introduced with GLSL 1.30, in favour of the now-deprecated gl_ClipVertex.

So are you using the correct version of the shading language (e.g. putting the line “#version 130” in your shader source)?
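For reference, a minimal GLSL 1.30 vertex shader that writes gl_ClipDistance would look something like this (a sketch only; the mvp and clipPlane uniforms and the choice of two planes are my own assumptions, and each plane used still needs glEnable(GL_CLIP_DISTANCE0 + i) on the API side):

```glsl
#version 130

uniform mat4 mvp;            // hypothetical model-view-projection matrix
uniform vec4 clipPlane[2];   // hypothetical user clip planes, same space as position

in vec4 position;

void main()
{
    gl_Position = mvp * position;

    // One signed distance per enabled clip plane; the fixed-function
    // clipper culls fragments where the interpolated distance is < 0.
    gl_ClipDistance[0] = dot(clipPlane[0], position);
    gl_ClipDistance[1] = dot(clipPlane[1], position);
}
```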

Hardware limitation? What platform are you working on?

At the top of each shader is:

#version 130

Moreover, using gl_ClipVertex produces a warning in the vertex compile log that it is deprecated. There is an easy (but dumb) way around this issue: pack the values that would go into gl_ClipDistance into out variables, so if the array size is 8 I lose two out vec4’s, and then in the fragment shader discard if any component is negative. I don’t like that VS/FS solution, but it would work. I wanted to use gl_ClipDistance[] because trying to use gl_ClipVertex in the geometry shader made libc freak out in the nVidia driver.
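A rough sketch of that workaround, with hypothetical names (eight clip distances packed into two vec4 varyings, then tested with discard in the fragment shader):

```glsl
// --- vertex shader side: pack the 8 distances into two varyings ---
#version 130

out vec4 clipDistA;   // would-be gl_ClipDistance[0..3]
out vec4 clipDistB;   // would-be gl_ClipDistance[4..7]

// ... compute the eight signed distances and assign them here ...

// --- fragment shader side: emulate the clipper ---
#version 130

in vec4 clipDistA;
in vec4 clipDistB;

void main()
{
    // Discard the fragment if any interpolated distance is negative,
    // mimicking what fixed-function clipping would have done.
    if (any(lessThan(clipDistA, vec4(0.0))) ||
        any(lessThan(clipDistB, vec4(0.0))))
        discard;

    // ... normal shading follows ...
}
```

As noted below in the thread, this costs rasterizer work and can defeat early depth optimizations, since the clipped geometry still generates fragments.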

{Toshiba Laptop}
OS: Linux (Ubuntu 8.10)
GPU: GeForce8700M
CPU: Intel Core2 Duo
RAM: 2GB

I’d say it’s simply not supported yet. Did you really expect that, when nVidia announced a “complete OpenGL 3.0 driver” on day one? It’s mostly hot air (and that “compatibility” extension…).

Jan.

I agree that it should work. However, if you have a moderately complex scene, you will definitely have much lower performance using the discard method since it will overload the rasterizing stage needlessly (creating pixels which will be discarded) and disable the early depth tests done before the pixel shader.

And I totally agree with you; that is why I don’t want to do it. But I did not know that discard killed early-z, is that really so? I ask because discard has no logical effect on early-z; I thought only writing to gl_FragDepth or changing the depth test did. If discard does kill early-z, does the alpha test in 2.x kill it too? (This is important because in 3.x the alpha test is gone and you are supposed to do it yourself in your fragment shader.)
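For completeness, the GL 3.x replacement for the fixed-function alpha test is exactly that kind of discard in the fragment shader; something like this (a sketch; the 0.5 reference value stands in for whatever glAlphaFunc reference you used in 2.x):

```glsl
#version 130

uniform sampler2D tex;
in vec2 texCoord;
out vec4 fragColor;

void main()
{
    vec4 c = texture(tex, texCoord);

    // Replaces GL_ALPHA_TEST with glAlphaFunc(GL_GREATER, 0.5) from GL 2.x.
    // Note: like any discard, this can disable early-z optimizations.
    if (c.a <= 0.5)
        discard;

    fragColor = c;
}
```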

Both alpha test and discard make z unpredictable and harder to compress, so yes, they do have a bad effect on a lot of z-test optimizations.

From the NVidia GPU Programming Guide:

If Depth or Stencil writes are enabled, or Occlusion Queries are
enabled, and one of the following is true:
• Alpha-test is enabled
• Pixel Shader kills pixels (clip(), texkill, discard)
• Alpha To Coverage is enabled
• SampleMask is not 0xFFFFFFFF (SampleMask is set in
D3D10 using OMSetBlendState and in D3D9 setting the
D3DRS_MULTISAMPLEMASK renderstate)

Other performance documents from AMD, GDC, etc. cover this as well.