Lighting in eye space or in world space?

Hi,

I’d like to know if there is a list of pros and cons for computing lighting in eye space versus world space. I’m asking this with OpenGL 3.1 in mind, so the fact that the fixed-function hardware performs those computations in eye space is not important.

Thanks,
Thu

Forget everything you’ve heard to the contrary and do your lighting in office space… Yeah, we’re going to need you to go ahead and do your lighting in eye space, OK? Great… :wink:

Thanks Brolingstanz for this enlightening advice! ^^

Whether you light in eye space or in world space does not really matter. The former means everything is in camera coordinates, the latter in model/world coordinates.

If you want the light to move with you, then it’s better to specify it in camera coordinates – assuming you want the light to follow you as you move through the scene, e.g. a gun-mounted light in a video game.

If you’re lighting a general scene, world space is OK.
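To illustrate (just a sketch, not from the original post – the light direction here is made up): a camera-attached light is simply a constant in eye coordinates, so it follows the viewer for free:

varying vec3 vienor, lightdir;

void main()
{
	vienor = normalize(gl_NormalMatrix * gl_Normal);	// eye-space normal

	// "Gun light": a constant direction in eye space follows the camera with no
	// per-frame update. A light fixed in the scene would instead be supplied as a
	// uniform and transformed into eye space (or the lighting done in world space).
	lightdir = vec3(0.0, 0.0, 1.0);

	gl_Position = ftransform();
}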

Thanks Awhig.

And thanks also to Brolingstanz; that was so useful.

All right, a serious response now (so shoot me).

Your world space may have coords large enough that lighting in that space would result in precision problems on the GPU. Also, to light in world space, you have to pass in your MODELING transform, and we know GL hasn’t got a clue about that. It only knows MODELVIEW. Tell you something? GL doesn’t light in WORLD. It lights in EYE. Also, you’ll need the MODELING inverse transpose to transform normals, in case you’re doing anything whacky like non-uniform scales and shears in your MODELING transform.

However, if you light in EYE space, you transform to EYE with MODELVIEW (or, for normals, the MODELVIEW inverse transpose), which GL already knows about.
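To make that concrete, a minimal sketch (the u_model / u_modelNormal uniforms are hypothetical names the application would have to supply itself, since GL only tracks MODELVIEW):

// Eye-space lighting: GL already tracks these matrices.
varying vec3 eyeNormal;
varying vec3 eyePos;

// World-space lighting: the application must supply the MODELING matrix itself
// (and its inverse transpose for normals) – GL only knows MODELVIEW.
uniform mat4 u_model;		// hypothetical, set by the application
uniform mat3 u_modelNormal;	// hypothetical: transpose(inverse(mat3(u_model)))
varying vec3 worldNormal;
varying vec3 worldPos;

void main()
{
	eyeNormal = gl_NormalMatrix * gl_Normal;
	eyePos    = vec3(gl_ModelViewMatrix * gl_Vertex);

	worldNormal = u_modelNormal * gl_Normal;
	worldPos    = vec3(u_model * gl_Vertex);

	gl_Position = ftransform();
}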

Also, some lighting/fogging effects depend on the distance from or vector from the eye. In eye-space, that’s just the magnitude of the coord vector, or the coord vector itself. In another space, you have to pass in an eye coord and go transforming it to figure out what your vector is.
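For example (a sketch; the fog falloff constant is made up):

varying vec3 eyePos;	// eye-space position from the vertex shader

void main()
{
	// In eye space the eye sits at the origin, so these are trivial:
	float eyeDist = length(eyePos);			// distance for fog/attenuation
	vec3  viewVec = -normalize(eyePos);		// unit vector towards the eye

	// In world space you would instead need the eye position passed in as a
	// uniform (say, a hypothetical u_eyePosWorld) and compute both from that.

	float fog = clamp(exp(-0.05 * eyeDist), 0.0, 1.0);
	gl_FragColor = vec4(vec3(fog), 1.0);
}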

Aside from that, light in object space, tangent space, or whatever whacky space you want. The con for these, BTW, is that you continually have to back-transform your light coords into that space, because those coords aren’t uniform across all objects for the entire frame. More work, but sometimes worth it.
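A sketch of that per-object cost (u_objectLightDir is a hypothetical uniform; the application would recompute it for every object as the inverse model matrix applied to the world-space light direction):

// Object-space lighting: the light has to be brought into each object's own space,
// typically once per object on the CPU, then used directly against gl_Normal.
uniform vec3 u_objectLightDir;	// hypothetical: inverse(model) * worldLightDir, per object
varying float NdotL;

void main()
{
	NdotL = max(dot(normalize(gl_Normal), normalize(u_objectLightDir)), 0.0);
	gl_Position = ftransform();
}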

As he has GL 3.1 in mind, his code can (and must) provide any of the matrices he’d need in a shader. Inverse matrix computation isn’t cheap, and I doubt it gains any precision, either.

Good point – missed that. So either way, he computes and passes it in.
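In GL 3.1 terms that just means a vertex shader along these lines (a sketch; the uniform and attribute names are made up, and the app computes u_normalMatrix as the inverse transpose of the upper 3x3 of the modelview matrix):

#version 140

uniform mat4 u_modelView;		// hypothetical: computed and uploaded by the application
uniform mat4 u_projection;
uniform mat3 u_normalMatrix;	// hypothetical: transpose(inverse(mat3(u_modelView)))

in vec3 position;
in vec3 normal;

out vec3 eyeNormal;
out vec3 eyePos;

void main()
{
	eyeNormal = u_normalMatrix * normal;
	eyePos    = vec3(u_modelView * vec4(position, 1.0));
	gl_Position = u_projection * vec4(eyePos, 1.0);
}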

He never said what kind of lighting he’s doing… could be best in SH space for all we know.
:wink:

I’ve no specific lighting in mind; my point was just that the fact that pre-3.0 OpenGL does it in eye space is irrelevant here…

I wonder if someone would mind checking my code below. The first pair of shaders is supposed to light in view space, the second in object space. (I think those are the right terms…)

varying vec3 vienon, halfve;

void main()
{	
	vienon = normalize(gl_NormalMatrix * gl_Normal);	// eye-space normal

	vec3 viepos = vec3(gl_ModelViewMatrix * gl_Vertex);	// eye-space position
	// half-vector between the view direction and a fixed eye-space light direction (1, 1, 1)
	halfve = normalize(-normalize(viepos) + normalize(vec3(1.0, 1.0, 1.0)));

	gl_Position = ftransform();
} 


varying vec3 vienon, halfve;

void main()
{
	vec3 n;
	float NdotL, NdotHV;
	n = normalize(vienon);
	NdotL = max(dot(n, normalize(vec3(1.0, 1.0, 1.0))), 0.0);
	NdotHV = max(dot(n, normalize(halfve)), 0.0);
	vec4 difcom = vec4(0.8, 0.8, 0.8, 1.0) * NdotL;
	vec4 specom = vec4(pow(NdotHV, 128.0));	// exponent must be a float, and the scalar needs a vec4 constructor
	//vec4 ambcom = vec4(0.2, 0.2, 0.2, 1.0);
	
	vec4 color = vec4(0.0);
	//color += ambcom;
	color += specom;
	color += difcom;
	gl_FragColor = color;
}


#version 120

varying vec3 objnor, halfve;

void main()
{	
	objnor = normalize(gl_Normal);	// object-space normal, passed through untransformed
	vec3 viepos = vec3(gl_ModelViewMatrix * gl_Vertex);
	// Convert the light direction into view coordinates, find the half-vector, then convert back again?
	vec3 viehalf = normalize(-normalize(viepos) + normalize(gl_NormalMatrix * vec3(1.0, 1.0, 1.0)));
	halfve = normalize(vec3(transpose(gl_ModelViewMatrix) * vec4(viehalf, 1.0)));

	gl_Position = ftransform();
} 


varying vec3 objnor, halfve;

void main()
{
	vec3 n;
	float NdotL, NdotHV;
	n = normalize(objnor);
	NdotL = max(dot(n, normalize(vec3(1.0, 1.0, 1.0))), 0.0);
	NdotHV = max(dot(n, normalize(halfve)), 0.0);
	vec4 difcom = vec4(0.6, 0.6, 0.6, 1.0) * NdotL;
	vec4 specom = vec4(pow(NdotHV, 128.0));	// exponent must be a float, and the scalar needs a vec4 constructor
	vec4 ambcom = vec4(0.2, 0.2, 0.2, 1.0);
	
	vec4 color = vec4(0.0);
	color += ambcom;
	color += specom;
	color += difcom;
	gl_FragColor = color;
}
