Issues with smooth-edged spotlights in OpenGL

Any values you specify as uniforms or attributes are passed directly to the shaders without any transformation. The matrices (modelview, projection) are whatever you set them to.

For fixed-function lighting, positions and directions are specified in object (model) space but transformed immediately and stored in eye space. Lighting calculations are performed in eye space.
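
As a concrete sketch of that convention (legacy GLSL with the built-in matrices, as used later in this thread; the varying name eyePos is just illustrative):


varying vec3 eyePos;

void main()
{
	vec4 p = gl_ModelViewMatrix * gl_Vertex;  // object space -> eye space
	eyePos = p.xyz;                           // interpolated for per-fragment lighting
	gl_Position = gl_ProjectionMatrix * p;    // eye space -> clip space
}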

Thanks for your reply, but I am sorry I still don’t quite understand it. So I pass the lightPosition with just the same world coordinates it has (position.x, position.y, 0.0) and the varying originalPos is set to modelViewMatrix multiplied by gl_Vertex, like this:


...
originalPos = gl_ModelViewMatrix * gl_Vertex;
...

But they somehow seem to be on totally different scales and I can’t get any sensible light calculations out of them. I calculate the vec3 lightDirection like this:


...
lightDirection = vec3(lightPosition[i] - originalPos);
distance = length(lightDirection);
...

And I then use its length to calculate the light attenuation, spotEffect and so on. So the two really need to be in the same space, but I don’t know how to achieve that. I understand why it doesn’t work this way, but not how to fix it. It would be awesome if you could clarify that :slight_smile:

[QUOTE=1337;1265564]Thanks for your reply, but I am sorry I still don’t quite understand it. So I pass the lightPosition with just the same world coordinates it has (position.x, position.y, 0.0)
[/QUOTE]
OpenGL doesn’t have world coordinates. It has object coordinates (the values passed via glVertexPointer etc), which are transformed by the model-view matrix to produce eye coordinates, which are transformed by the projection matrix to produce clip coordinates.

At least, that’s the case for the fixed-function pipeline. With shaders, you still have object coordinates (i.e. the original values passed into the shaders via attributes and uniforms) and clip coordinates (the values written to gl_Position). Any other coordinate system is up to the programmer, although it’s common to use eye coordinates for lighting.

In the above code, originalPos will be in eye coordinates, lightPosition[i] will be in whatever coordinate system you choose.

You should probably be transforming the light positions (and spot directions) by the modelview matrix before passing them to the shader via glUniform or whatever. This is how the fixed-function pipeline behaves; glLightfv(GL_POSITION) transforms the given position by the current model-view matrix and stores the resulting eye coordinates for use in subsequent lighting calculations. Similarly for GL_SPOT_DIRECTION (except that the translation component of the matrix is ignored).
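
On the shader side, the light uniforms then arrive already in eye coordinates and can be combined directly with an eye-space position. A minimal fragment-shader sketch, assuming the application transformed lightPosition to eye space before the glUniform call (MAX_LIGHTS and the names here are illustrative assumptions):


const int MAX_LIGHTS = 8;               // illustrative constant
uniform vec4 lightPosition[MAX_LIGHTS]; // already in eye space, w = 1.0 for positional lights
varying vec3 eyePos;                    // eye-space position from the vertex shader

void main()
{
	// Both operands are in eye space, so the difference is meaningful.
	vec3 lightDirection = lightPosition[0].xyz - eyePos;
	float dist = length(lightDirection);
	float attenuation = 1.0 / (1.0 + 0.05 * dist); // illustrative linear falloff
	gl_FragColor = vec4(vec3(attenuation), 1.0);
}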

Beyond that: if you’re going to be using shaders, you should avoid using the OpenGL matrix functions. Generate the matrices in the application (either using your own code or a library such as GLM) and pass them as uniforms. Reading the legacy matrices out of OpenGL with glGetDoublev(GL_MODELVIEW_MATRIX) etc can have a significant performance cost (the same goes for any glGet* function; you should avoid using any of those in per-frame code).
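
In the shader, that just means declaring the matrices as plain uniforms instead of relying on the legacy built-ins. A sketch; the uniform and attribute names are assumptions, not a fixed API:


uniform mat4 u_modelView;   // computed in the application (e.g. with GLM)
uniform mat4 u_projection;
attribute vec4 a_position;
varying vec3 eyePos;

void main()
{
	vec4 p = u_modelView * a_position;
	eyePos = p.xyz;
	gl_Position = u_projection * p;
}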

[QUOTE=GClements;1265566]OpenGL doesn’t have world coordinates. … You should probably be transforming the light positions (and spot directions) by the modelview matrix before passing them to the shader via glUniform or whatever.[/QUOTE]

Thanks for the awesome reply. So, it should work if I just multiply lightPosition by gl_ModelViewMatrix, and then they will be in the same coordinate space again?
Like this:


lightDirection = vec3(lightPosition[i] * gl_ModelViewMatrix - originalPos);

But somehow that doesn’t work either.

Yes. Although ideally you’d want to avoid performing the transformation for each vertex. Also, if you change the model-view matrix for each object, the light will move accordingly (i.e. the light will be in a fixed position in object space, which is unlikely to be what you want).

This is why glLight() transforms the position when you set it, not during rendering.
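
If you do transform in the shader anyway, two details matter in the line quoted above: GLSL treats v * M as multiplication by the transpose (with column vectors the matrix belongs on the left), and a positional light needs w = 1.0 so the translation part of the matrix is applied. A hedged sketch, assuming lightPosition[i] is a vec3 in the same space as gl_Vertex and originalPos is an eye-space vec3:


// Matrix on the left (column-vector convention), w = 1.0 for a position:
lightDirection = (gl_ModelViewMatrix * vec4(lightPosition[i], 1.0)).xyz - originalPos;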

Well, it’s hard to say what’s wrong without more context.

Thanks again, I will provide more context now. I have been trying things for hours and hours and just can’t get it to work at all :frowning:

Also, if just multiplying lightPosition by gl_ModelViewMatrix solved the problem, wouldn’t that mean I could leave both alone, because originalPos in the fragment shader was multiplied by the same matrix in the vertex shader anyway? That confuses me. I have tried just about every possible combination of multiplying this with that, and nothing works.
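
(A side note on why transforming both by the same matrix is not a no-op: writing the model-view transform as $Mx = Rx + t$, with linear part $R$ and translation $t$, the difference of two transformed points is

$$Ma - Mb = R(a - b),$$

so the translation cancels, but any rotation or scaling in $R$ still changes the direction and length of the difference vector, and with it the distance used for attenuation.)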

(Always with varying vec3 originalPos in the vertex shader, and with these light properties: x = 500, y = 500, color = yellow, intensity = 1.0, spotLight = true, spotDirection = 0, angle = 360 (note: cutoff is angle / 2.0f), spotExponent = 0, linearAttenuation = 0.05; applied to a 1920×1080 white image.)

Option 1:

Using the lightPosition as given (x = 500, y = 500, z = 0) and originalPos taken straight from gl_Vertex, untransformed.

Vertex Shader (in this example both shaders contain only the parts that affect the light; the rest doesn’t change and works fine anyway):


originalPos = gl_Vertex;

Fragment Shader:


lightDirection = vec3(lightPosition[i] - originalPos);
distance = length(lightDirection);
spotEffect = dot(normalize(spotDirection[i]), normalize(-lightDirection));

if (spotEffect > spotCosCutOff[i])
{
	spotEffect = pow(spotEffect, spotExponent[i]);
	attenuation = spotEffect / (1.0 + linearAttenuation[i] * distance + quadraticAttenuation[i] * distance * distance);

	color += attenuation * lightColor[i] * ownColor * intensity[i];
}

Result:

[ATTACH=CONFIG]997[/ATTACH]

Doesn’t seem too bad, but doesn’t really work either. The light moves with the texture, and the 360 degree angle isn’t quite there. It makes no difference whether I use 180 or 360 as the angle; other angles seem to work fine, and spotExponent, direction and attenuation seem to work too. So it’s just the position and the 360 degree thing (which should emit light in every direction).

Option 2:

Vertex Shader:


originalPos = gl_ModelViewMatrix * gl_Vertex;

Fragment Shader:


same as above

Result:

Completely dark screen at x = 500, y = 500. At x = 0, y = 0 it looks like this:

[ATTACH=CONFIG]998[/ATTACH]

Weirder than before. With a spotExponent of 1, however, we discover that the light source is somehow outside the screen (even though it should be at x = 0, y = 0 now):

[ATTACH=CONFIG]999[/ATTACH]

It also reacts to position changes and all sorts of factors, though on a totally different scale (a change of 1 in x looks like 100 pixels). The 360 degree angle seems to sort of work here, but I can’t be sure because the light is offscreen. A spot direction of 180 (angle → x = -1, y = 0) just flips the whole image horizontally, which is totally not what should happen (it should stay the same, because spotDirection shouldn’t matter at a 360 degree angle). When changing spotExponent to 0 and linearAttenuation to 0.1, it almost seems as if a spotExponent of zero flips the light again, now making it look almost correct, apart from only being halfway there:

[ATTACH=CONFIG]1000[/ATTACH]

But heyyy, the light position now at least seems to be unaffected by the texture position.

Option 3:

Only the light position is multiplied by the modelview matrix; the original light properties are restored.

Vertex Shader:


originalPos = gl_Vertex;

Fragment Shader:


// lightPosition is a vec3, so I extend it to a vec4 with 0.0 as w to
// multiply it with gl_ModelViewMatrix, then truncate it back to a vec3
lightDirection = vec3(vec3(vec4(lightPosition[i], 0.0) * gl_ModelViewMatrix) - originalPos);
// the other part remains the same

Result:

[ATTACH=CONFIG]1001[/ATTACH]

The part I marked should demonstrate the distance that x = 500, y = 500 now represents. Kind of works too, but not really. 360 degrees doesn’t work either, and the position moves with the texture. spotDirection and so on work, but 360 and 180 degrees still look the same (how can that even happen? Isn’t cos(180°), which is the cutoff (angle / 2.0f), equal to -1.0? So basically every other cosine should be greater than that, right?).

Option 4:

#2 and #3 combined.

Vertex Shader:


originalPos = gl_ModelViewMatrix * gl_Vertex;

Fragment Shader:


lightDirection = vec3(vec3(vec4(lightPosition[i], 0.0) * gl_ModelViewMatrix) - originalPos);

Result:

http://imgur.com/EDDGOXV (can’t add more than 5 images per post)

Yes, it really is the whole screen. With x = 0 and y = 0 it looks like this:


Still not really better though.

Does anyone have any ideas? It would be super awesome if someone could tell me why it isn’t working. I have been working on this for days now and nothing really works.