Issues with smooth edges spotlights in OpenGL

OK, I think I found the root of the evil now: clipping. The incorrectly rendered images are all drawn as a resized pattern, and the implementation for that draws more than needed and then clips away the rest. So it looks like my shaders somehow broke clipping? Do I need to re-implement it? Is this done in the “original” shader program? What do I have to add to make it work again?

If you’re getting repeated copies of the texture, it means that a) your texture coordinates aren’t limited to the range 0…1, and b) the wrapping mode is GL_REPEAT (which is the default).

If you expect either a portion of the texture or the entire texture to be mapped to the polygon, check your texture coordinates. If you expect to see one copy of the texture surrounded by a border, change the wrapping mode to GL_CLAMP_TO_EDGE (or GL_CLAMP_TO_BORDER) with glTexParameteri(), once for GL_TEXTURE_WRAP_S and once for GL_TEXTURE_WRAP_T.

If this was working before you used a fragment shader, what has changed since then?

I think the problem is that clipping isn’t working anymore. The textures that don’t require clipping are drawn fine, but some objects need to fill up space with a certain image pattern, and there clipping is used, which my shader seems to break. I researched this and found some sources (such as How to activate clip planes via shader? - OpenGL: Basic Coding - Khronos Forums) stating that because custom shaders override the original functionality, clipping has to be re-implemented. That does seem to be the problem, but I have no idea how to check in the fragment shader whether a pixel is being clipped; none of the links I found included real code examples.

It’s not clear exactly what you mean by “clipping”, since you use the term incorrectly in a couple of places. “Pixels” (I’ll assume you meant “Fragments”) don’t get clipped. Only triangles get clipped.

Custom shaders do override the old user-defined clip plane support, but that doesn’t affect [i]viewport[/i] clipping at all.

If clipping is indeed some kind of problem, it’s more likely that you did something wrong with your viewport than anything else.

Was your previous code using glClipPlane() and glEnable(GL_CLIP_PLANE0) etc?

If it wasn’t, then the problem has nothing to do with clipping.

If it was, then you’re correct that you need to re-implement this, but it’s the vertex shader which is responsible, not the fragment shader. User clip planes are still enabled in the same way, but rather than specifying plane coefficients with glClipPlane(), the vertex shader writes the signed distance from the vertex to each plane to gl_ClipDistance[i] (possibly computed from plane coefficients passed via uniforms); fragments with a negative distance are clipped.
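A minimal vertex-shader sketch of that approach (the uniform name clipPlane0 is an assumption; the plane coefficients are assumed to already be in eye space, which is the space glClipPlane() stored them in):

```glsl
#version 130
// Compatibility-profile GLSL: the gl_ModelView* built-ins are still available.
// Hypothetical uniform holding the plane coefficients (A, B, C, D) that were
// previously passed via glClipPlane(), assumed to be in eye space.
uniform vec4 clipPlane0;

void main()
{
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    // Positive distance = inside the half-space, negative = clipped.
    gl_ClipDistance[0] = dot(eyePos, clipPlane0);

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
```

On the host side you still enable the plane as before, with glEnable(GL_CLIP_DISTANCE0) (the same enum value as GL_CLIP_PLANE0).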

If you aren’t using a vertex shader, then clipping should function as before.

Another thought: if you’re now using the w component of the vertex position for depth, that will interact with clipping.

Awesome, you’re right! I was using glClipPlane and am now using glScissor instead which works fine. Everything now looks like it did before which means I finally can look into lighting. Thank you so much :slight_smile:

I need your help once more, as I am unable to work out why my shader won’t compile. I created this fragment shader:


...
uniform vec4 lightColor[MAX_LIGHTS];

uniform int minAffectedDepth[MAX_LIGHTS];
uniform int maxAffectedDepth[MAX_LIGHTS];

uniform float intensity[MAX_LIGHTS];
...

void main()
{
	vec4 ownColor = gl_Color;
	vec4 color = vec4(0.0, 0.0, 0.0, 1.0);

	if (isTexture)
	{
		ownColor *= texture2D(texture, gl_TexCoord[0].st);
	}
	
	for (int i = 0; i < numLights; i++)
	{
		if (myDepth >= minAffectedDepth[i] && myDepth <= maxAffectedDepth[i])
		{
			if (isSpotLight[i])
			{
				// TODO: spot light handling
			}
			else
			{
				color += ownColor * lightColor[i] * intensity[i]; // <--- this line
			}
		}
	}

	gl_FragColor = color;
}

The shader without the line does what it is supposed to do: turn everything black (no light). But when I add the line


color += ownColor * lightColor[i] * intensity[i];

the shader doesn’t compile anymore and doesn’t even give any error messages.
At first I thought it was because I was using arrays wrong, but the minAffectedDepth[] check works just fine and that is an array access too. What am I doing wrong?

[QUOTE=1337;1265515]the shader doesn’t compile anymore and doesn’t even give any error messages.
At first I thought it was because I was using arrays wrong, but the minAffectedDepth[] check works just fine and that is an array access too. What am I doing wrong?[/QUOTE]

I’m not sure.

But you might be exceeding the maximum number of uniform components in the default uniform block. Uniforms are optimised away if they’re not used, so simply adding the declaration won’t cause compilation to fail even if using it will. If that’s what’s happening, then you’ll need to use an explicit uniform block (glUniformBlockBinding() etc in OpenGL 3.1+) backed by a uniform buffer object.
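For example (a sketch only; the block name, layout and array packing here are illustrative, and std140 needs GLSL 1.40 / OpenGL 3.1 or GL_ARB_uniform_buffer_object):

```glsl
#version 140
#define MAX_LIGHTS 64

// Sketch: the per-light arrays from your shader moved into a named uniform
// block backed by a uniform buffer object. Note that under std140 layout,
// array elements are padded to 16 bytes, so ints and floats are packed into
// vec4-sized slots here to avoid wasting space.
layout(std140) uniform LightBlock {
    vec4  lightColor[MAX_LIGHTS];
    ivec4 affectedDepth[MAX_LIGHTS]; // x = minAffectedDepth, y = maxAffectedDepth
    vec4  intensity[MAX_LIGHTS];     // intensity in .x
};
```

On the host side you would query the block with glGetUniformBlockIndex(), assign it a binding point with glUniformBlockBinding(), and attach a buffer with glBindBufferBase(GL_UNIFORM_BUFFER, …).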

Also, you need to check the status (and any error messages) for both compilation and linking, as errors may occur at either phase.

I checked the linkage status and it already fails to link the program while the individual compiling of the shaders works fine. Also, the maximum number of uniforms in the fragment shader (according to my OpenGL) is 35657, so I am not exceeding that. Any idea what could be causing the linking error? o.O

EDIT: I just tested whether maybe the error log gets overridden after each step - it does. So here is the linkage error I am getting:

Internal error: assembly compile error for fragment shader at offset 79443:
– error message –
line 1704, column 35: error: invalid local parameter number
line 1710, column 18: error: out of bounds array access
line 1719, column 19: error: out of bounds array access
line 1724, column 15: error: out of bounds array access
line 1725, column 15: error: out of bounds array access
line 1729, column 25: error: offset for relative array access outside supported range
line 1734, column 27: error: offset for relative array access outside supported range
– internal assembly text –
!!NVfp5.0
OPTION NV_shader_atomic_float;

cgc version 3.1.0001, build date Jan 18 2013

command line args:

#vendor NVIDIA Corporation
#version 3.1.0.1
#profile gp5fp
#program main
#semantic numLights
#semantic lightPosition
#semantic lightColor
#semantic minAffectedDepth
#semantic maxAffectedDepth
#semantic isSpotLight
#semantic spotDirection
#semantic spotExponent
#semantic spotCutOff
#semantic spotCosCutOff
#semantic intensity
#semantic constantAttenuation
#semantic linearAttenuation
#semantic quadraticAttenuation
#semantic texture
#semantic isTexture
#semantic myDepth
#var float4 gl_Color : $vin.COLOR0 : COL0 : -1 : 1
#var float4 gl_TexCoord[0] : $vin.TEX0 : TEX0 : -1 : 1
#var float4 gl_TexCoord[1] : : : -1 : 0
#var float4 gl_TexCoord[2] : : : -1 : 0
#var float4 gl_TexCoord[3] : : : -1 : 0
#var float4 gl_TexCoord[4] : : : -1 : 0
#var float4 gl_TexCoord[5] : : : -1 : 0
#var float4 gl_TexCoord[6] : : : -1 : 0
#var float4 gl_TexCoord[7] : : : -1 : 0
#var float4 gl_FragColor : $vout.COLOR : COL0[0] : -1 : 1
#var int numLights : : c[640] : -1 : 1
#var float3 lightPosition[0] : : : -1 : 0
#var float3 lightPosition[1] : : : -1 : 0
#var float3 lightPosition[2] : : : -1 : 0
… and so on with every single array…
#var float quadraticAttenuation[125] : : : -1 : 0
#var float quadraticAttenuation[126] : : : -1 : 0
#var float quadraticAttenuation[127] : : : -1 : 0
#var sampler2D texture : : texunit 0 : -1 : 1
#var bool isTexture : : c[641] : -1 : 1
#var int myDepth : : c[642] : -1 : 1
PARAM c[643] = { program.local[0..642] };
ATTRIB fragment_texcoord[] = { fragment.texcoord[0..0] };
TEMP R0, R1, R2, R3;
TEMP RC, HC;
OUTPUT result_color0 = result.color;
TEXTURE texture0 = texture[0];
MOV.U.CC RC.x, c[641];
MOV.F R0, fragment.color;
MOV.F R1, {0, 1, 0, 0}.xxxy;
IF NE.x;
TEX.F R0, fragment.texcoord[0], texture0, 2D;
MUL.F R0, fragment.color, R0;
ENDIF;
MOV.S R3.x, {0, 0, 0, 0};
REP.S ;
SLT.S R2.x, R3, c[640];
SEQ.U R2.x, -R2, {0, 0, 0, 0};
MOV.U.CC RC.x, -R2;
BRK (GT.x);
MOV.U R2.x, R3;
SLE.S R2.y, c[642].x, c[R2.x + 256].x;
SGE.S R2.x, c[642], c[R2.x + 128];
AND.U.CC HC.x, -R2, -R2.y;
IF NE.x;
MOV.U R2.x, R3;
SEQ.U R2.x, c[R2.x + 384], {0, 0, 0, 0};
MOV.U.CC RC.x, -R2;
IF NE.x;
MOV.U R3.y, R3.x;
MUL.F R2, R0, c[R3.y];
MAD.F R1, R2, c[R3.y + 512].x, R1;
ENDIF;
ENDIF;
ADD.S R3.x, R3, {1, 0, 0, 0};
ENDREP;
MOV.F result_color0, R1;
END

30 instructions, 4 R-regs

(1)

Did I just somehow break OpenGL?

EDIT: I changed MAX_LIGHTS to 64 and it works. Is this an OpenGL bug?
EDIT: It definitely looks like an OpenGL bug now. The magic number seems to be 101 (works fine); everything >= 102 gives the linking error.

100 lights would mean, given your definitions, 700 uniform components (each vec4 counts as 4 components). That’s a lot. This isn’t an “OpenGL bug”. You’re almost certainly exceeding the implementation’s uniform limits, just as GClements suggested; it’s no surprise that the linker chokes on it.

Nor am I surprised that NVIDIA’s multi-layered compiler gives such an obtuse error message for it…

[QUOTE=Alfonse Reinheart;1265524]100 lights would mean, given your definitions, 700 uniform components (each vec4 counts as 4 components). That’s a lot. This isn’t an “OpenGL bug”. You’re almost certainly exceeding the implementation’s uniform limits, just as GClements suggested; it’s no surprise that the linker chokes on it.
[/QUOTE]

Well, GL_MAX_FRAGMENT_UNIFORM_COMPONENTS is 35657 (which is just a little more than 700 ;)) for me, as mentioned earlier. Still seems like a bug.

A limit that large is a bit surprising from NVIDIA hardware since, last I heard, non-block uniforms were actually compiled into the shader executable. But there it is.

So it seems more like their internal Cg compiler can’t handle uniform arrays of that size, since it walked past some internal compiler limit.

You can do as GClements suggested and use a UBO instead of non-block uniforms. You’ll have to work out how to deal with the difficulties of doing that in LWJGL though.

35657 = 0x8B49, which is the value of the enumeration constant GL_MAX_FRAGMENT_UNIFORM_COMPONENTS.

That’s the “key” used to query the limit, not the value of the limit. To obtain the value, you need to call glGetIntegerv() with GL_MAX_FRAGMENT_UNIFORM_COMPONENTS as the first argument ([var]pname[/var]).

Oh. Sorry then, stupid mistake. ;) It’s still 2048, though. With 101 lights * 13 array uniforms I get 1313 array uniforms; plus 4 others makes 1317 (< 2048). Still smaller than the limit. So is it a bug or am I just being stupid again? :slight_smile:

With 101 * 13 uniforms I get 1313 array uniforms + 4 other is 1317 (< 2048). Still smaller than the limit.

It’s not the number of uniforms; it’s the number of components. One of the arrays you showed us is a vec4, and a vec4 has 4 components. So that would be 4 * 101, just for that array.

Ahhh. Ok that makes sense, thanks for pointing that out. So I was being stupid (again) :wink:

So 4 + 102 (the number where it starts to throw errors) * 20 components = 4 + 2040 = 2044. Is that close enough to the limit of 2048?

It’s best not to get that close to the limits. Even if you’re technically under them, there is usually a degree of fuzziness in them. For example, I’d guess that many implementations don’t count sampler uniforms, since they use different resources from regular uniforms.

Yeah, it’s technically a driver bug if compilation fails when you’re under the limit, but with you being so close to the edge, they’d probably consider it an edge case and prioritize fixing it appropriately.

I set MAX_LIGHTS to 64 now. With that I should be safe on any platform, right?

OpenGL 2 only requires GL_MAX_FRAGMENT_UNIFORM_COMPONENTS to be at least 64. OpenGL 3 increases the minimum to 1024.

So you should be safe for OpenGL 3 or later, but not for OpenGL 2.

If you need to support OpenGL 2, query the limit and adjust MAX_LIGHTS accordingly.
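A sketch of that adjustment (the function name and the 25% headroom are my own choices; the 20 components per light and 4 non-array components come from the declarations discussed above, with a vec4 counting as 4 components and an int/float as 1):

```python
def safe_max_lights(max_fragment_uniform_components,
                    components_per_light=20,
                    non_array_components=4,
                    headroom=0.75):
    """Derive a safe MAX_LIGHTS from the queried value of
    GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, leaving ~25% headroom below
    the limit, since implementations may count samplers or add padding."""
    budget = int(max_fragment_uniform_components * headroom) - non_array_components
    return max(budget // components_per_light, 0)

# e.g. with the 2048-component limit seen in this thread:
print(safe_max_lights(2048))  # 76
# and with the OpenGL 2 minimum of 64 components:
print(safe_max_lights(64))    # 2
```

The queried limit would come from glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS) at startup; the resulting count can then be injected into the shader source as the MAX_LIGHTS define before compiling.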

I just hit another problem I currently have no solution for:

Ambient light works fine, but spot lights are giving me trouble, namely the “real” (world-coordinate) position of a given pixel: currently the coordinates seem to be scaled wrong (an additional offset of 1 that I added manually in the shader shows up as something like 100 pixels). I found several articles and posts saying that you need the model matrix, but none of them actually showed how to get it. This post (How do I get a fragments x,y,z in world coordinates in the fragment shader? - OpenGL - Khronos Forums) suggests that you need to multiply the modelview matrix with the inverse view matrix, but it doesn’t explain how to get the view matrix. There is no glGetFloat for it. How can I solve this issue?

EDIT: Specifically, the vertex-to-fragment-passed coordinates of the


varying vec3 originalPos;

void main()
{
	originalPos = vec3(gl_ModelViewMatrix * gl_Vertex); // vec3 from a vec4 needs an explicit conversion
        ...

are not in the same space (world coordinates? not sure of the right term here) as the light positions (which are, I think, plain world coordinates).

Thanks for any kind of help :slight_smile:
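For what it’s worth, gl_ModelViewMatrix takes vertices to eye space, not world space, and fixed-function OpenGL has no separate view matrix to query, which is why there is no glGetFloat for it. One common approach, sketched below under those assumptions, is to do the lighting in eye space instead: keep the vertex shader as it is and transform the light positions by the same view transform on the CPU before uploading them, so both sides of the comparison are in the same space (the varying name eyePos is hypothetical):

```glsl
// Vertex shader sketch: pass the eye-space position to the fragment shader.
// The light positions then also have to be supplied in eye space, i.e.
// transformed by the camera/view transform before being set as uniforms.
varying vec3 eyePos;

void main()
{
    eyePos = vec3(gl_ModelViewMatrix * gl_Vertex); // eye space, not world space
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
```

This mirrors what fixed-function lighting does: glLight(GL_POSITION, …) also stores the position transformed by the modelview matrix current at the time of the call.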