Camera position in object space

Well, it’s more Cg/math, but there’s still a wee bit of OpenGL involved, so I’ll try asking it here.

I know it’s possible to get the camera position in object space from the matrix that transforms from object space to world space in my shaders (i.e. the modelview matrix).

Does anyone know a nice elegant way to do this ?

I have an explanation, but I don’t completely understand it. It says:

  • take values 11, 12 and 13 from the modelview matrix (this is the translation component)
  • negate those values

and then… “the inverse transformation is the dot product of the rows of the matrix…”

this last part confuses me.

The modelview matrix transforms object-space coordinates to eye-space coordinates, not world-space coordinates. (There is no world space for OpenGL.)

I’m not sure I understand what you need; maybe you don’t quite know yourself what you want. I suggest you read this first:
Opengl transformations

especially paragraph 9.011.

hmm you’re right

but in the Cg tutorial they speak about a ModelToWorld matrix, which you can get via glstate from the OpenGL environment

and to illustrate what I said above, here is an extract of GLSL code from the NVIDIA 9.5 SDK:

    // transform position to world space
    vec4 worldPos = gl_ModelViewMatrix * position; // mul by model

but in the end my question remains the same: I use the gluLookAt function to set my camera, and I need that position in object space in my vertex shader.

Basically what they’re trying to say is that you need to take the translational component of the inverse modelview matrix. IMO the rest of the explanation shows how to calculate the inverse modelview matrix for the most common case (no scaling/skewing).
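For that common no-scaling/no-skewing case, the recipe can be sketched outside the shader in plain Python (a stand-in for the matrix math only; the helper name is mine, not OpenGL’s). For a rigid-body modelview matrix M = [R | t], the inverse is [Rᵀ | −Rᵀt], so the camera position in object space is −Rᵀt: each component is the negated dot product of a column of the rotation block with the translation, which is what the “dot product of the rows” remark is getting at.

```python
import math

def camera_position_object_space(modelview):
    """modelview: 4x4 rigid-body matrix (rotation + translation only),
    written here as a list of 4 rows in plain mathematical row-major
    convention (not OpenGL's column-major storage)."""
    # Translation component: the last column of the upper three rows.
    t = [modelview[i][3] for i in range(3)]
    # Apply the transposed (= inverse) rotation to -t: dot each *column*
    # of the 3x3 rotation block with t, then negate the result.
    return [-sum(modelview[i][j] * t[i] for i in range(3)) for j in range(3)]

# Quick check: a 90-degree rotation about Z plus a translation.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
M = [[c, -s, 0, 1.0],
     [s,  c, 0, 2.0],
     [0,  0, 1, 3.0],
     [0,  0, 0, 1.0]]
cam = camera_position_object_space(M)
```

The camera sits at the eye-space origin, so this is the same as pushing (0, 0, 0, 1) through the full inverse modelview, just cheaper when you know the matrix is rigid.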

If you read the link I gave you, you will see that the coordinates you give to your OpenGL application (with glVertex*, for example) are in object space. In your vertex shader, gl_Vertex contains the vertex coordinates in object space.

gluLookAt just affects the modelview matrix, that is, the transformation from object space to eye space.

yes I did read it, that’s why I’m confused.

Why do they speak about “world space” in the Cg tutorial if apparently no such thing exists in OpenGL? Or are they just using a different name for the same thing?

And I know my vertex position and normal in my vertex shader are in object space. The problem is that I need to calculate the view direction (i.e. the camera-to-vertex vector) in object space. I can, for example, query the modelview matrix with glstate.matrix.modelview[0], transform the object-space data to eye space, and pass the coordinates I use in gluLookAt as a uniform. With those I can get the view direction in eye space, but I can’t transform back to object space.

If you consider the same entity duplicated in your scene, all these instances have different vertex coordinates in world space, because there is only one origin.
In object space, however, these instances can be considered the same entity duplicated in the scene; i.e. vertex coordinates are given relative to the object’s center (or another reference point), which is the object-space origin. To draw each instance of that object, you just move this reference point and draw the entity with the same vertex coordinates. In world space you would have to recalculate all these coordinates.
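A toy illustration of that point (plain Python, names mine): two instances of the same object share one set of object-space vertices; only the per-instance transform (here a plain translation) differs, and the world-space coordinates are derived on the fly instead of being stored per instance.

```python
# One triangle, defined once in object space around its own origin.
object_vertices = [(-1.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

def place(vertices, offset):
    """Derive world-space coordinates for one instance by translating
    the shared object-space vertices by the instance's reference point."""
    ox, oy = offset
    return [(x + ox, y + oy) for x, y in vertices]

# Two instances: same object-space data, different reference points.
instance_a = place(object_vertices, (10.0, 0.0))
instance_b = place(object_vertices, (-5.0, 3.0))
```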

If you want to compute the view direction vector in object space, you just have to multiply this vector by the inverse modelview matrix, which is already computed by OpenGL and available in the gl_ModelViewMatrixInverse uniform in your shader.
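To see why that works, here is a sketch in plain Python (assumption: `look_at` below re-implements the matrix that gluLookAt builds, per its man page; `invert_rigid` plays the role of gl_ModelViewMatrixInverse). Pushing the eye-space origin through the inverse modelview recovers the camera position in object space:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def look_at(eye, center, up):
    """gluLookAt-style view matrix, rows in row-major convention."""
    f = normalize([c - e for c, e in zip(center, eye)])
    s = normalize(cross(f, up))
    u = cross(s, f)
    m = [s + [0.0], u + [0.0], [-x for x in f] + [0.0], [0.0, 0.0, 0.0, 1.0]]
    for i in range(3):  # translation: rotated, negated eye position
        m[i][3] = -sum(m[i][j] * eye[j] for j in range(3))
    return m

def invert_rigid(m):
    """Inverse of a rigid-body matrix [R | t] is [R^T | -R^T t]."""
    inv = [[m[j][i] for j in range(3)] + [0.0] for i in range(3)]
    inv.append([0.0, 0.0, 0.0, 1.0])
    for i in range(3):
        inv[i][3] = -sum(inv[i][j] * m[j][3] for j in range(3))
    return inv

eye = [4.0, 2.0, 7.0]
modelview = look_at(eye, [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
inv = invert_rigid(modelview)
# The camera sits at the eye-space origin (0,0,0,1); mapping it through
# the inverse modelview gives the camera position in object space.
cam_obj = [inv[i][3] for i in range(3)]
```

With no extra model transform, object space coincides with the space `eye` was given in, so `cam_obj` comes back as the original eye point.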


if I have this as setup in OpenGL

	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb[0].fbobj);

	//glutSolidSphere(1.0, 100,100);
then this would be the correct way to do it in Cg, with eyePosition == the first three values passed to the gluLookAt function:

void rgbLightingVertexProgram(
	float4 position		: POSITION,
	float3 normal		: NORMAL,
	out float4 oPosition	: POSITION,
	out float3 objectPos	: TEXCOORD0,
	out float3 oNormal	: TEXCOORD2,
	out float3 I		: TEXCOORD3,
	out float3 R		: TEXCOORD4,
	uniform float3 eyePosition)
{
	float4x4 ModelToWorld        = glstate.matrix.modelview[0];
	float4x4 ModelViewProjection = glstate.matrix.mvp;
	float4x4 ModelViewInverse    = glstate.matrix.inverse.modelview[0];

	// Calculate clip-space position
	oPosition = mul(ModelViewProjection, position);

	// Normal to worldspace/eyespace
	oNormal = mul(ModelToWorld, float4(normal, 0)).xyz;

	// Vertex position to worldspace/eyespace
	float3 positionW = mul(ModelToWorld, position).xyz;
	float3 N = normalize(oNormal);

	// Incident direction in worldspace
	float3 Iw = normalize(eyePosition - positionW);

	// Incident in objectspace
	I = mul(Iw, ModelViewInverse);

	// This should be R in objectspace?
	R = reflect(I, normal);
}

The shader isn’t complete, but I extracted the relevant part.

I don’t know Cg but this line is suspicious:

Iw = normalize(eyePosition - positionW);

eyePosition is in object space; positionW is in eye space. That won’t give good results.

So I’m still not completely comprehending it. I thought the camera/eye position specified in gluLookAt was in eye space, so I thought eyePosition was in eye space. If it were in object space, I’d have to redefine it for every object, no?

edit: oh, those coordinates are indeed in object space.

So I could just calculate the incident vector in object space by doing I = normalize(eyePosition - position);

seeing as position and eyePosition are both in object space.

No! Re-read what I said and what the OpenGL FAQ says. The eye position is given by your application in object space, like all the objects you define in your scene. The eye is like a camera object that you place in the scene. How could you locate the camera in the scene using its own space to define its coordinates? That makes no sense, because your camera is just a point, the center of its own object space.

Yes use position, not positionW.

Thanks a lot, I solved the problem with your tips :slight_smile: