I’ve got a bit of a problem. In my OpenGL (3.0 core, GLSL 1.4, context from SDL2) scene I’ve got objects being rendered from the point of view of a moving camera (meaning it can be translated and rotated). As such, in order to get my eye vector I need to transform the vertex position from model space to world space (using my model-view matrix) and then to eye space (using the camera transform). This eye-space vertex position seems to work as the eye vector for my lighting computations, provided I do everything else in eye space.

That’s all well and good, but I’m moving on to environment mapping and have generated a cube map that I wish to compute reflections from. In order to sample a cube map I need to find the reflection of the eye vector off of the surface normal, and I think that I need that reflection in world space. It would be possible to compute this reflection in eye space (by transforming the vertex normal to eye space). However, I’ve already got a reason to compute my model’s vertex normal in world space (using inverse(transpose(MV))), so I’d prefer to find the eye vector in world space as well.
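Whatever space the vectors end up in, the lookup direction itself is GLSL’s reflect(I, N) = I - 2·dot(N, I)·N, where I points from the eye toward the surface and N is a unit normal. A minimal Python check of that formula, using hypothetical axis-aligned vectors:

```python
# GLSL's reflect(I, N) = I - 2*dot(N, I)*N, where I points from the
# eye toward the surface and N is the unit surface normal.
def reflect(I, N):
    d = sum(n * i for n, i in zip(N, I))
    return [i - 2 * d * n for i, n in zip(I, N)]

# Incident ray straight down onto a floor facing up: bounces straight up.
print(reflect([0, -1, 0], [0, 1, 0]))   # [0, 1, 0]

# 45-degree hit on the same floor: x is preserved, y flips.
print(reflect([1, -1, 0], [0, 1, 0]))   # [1, 1, 0]
```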

Given:
Model View matrix (model space -> world space) MV
Camera matrix (world space -> eye space) C
Model space position m_Pos

How can I get the world space eye vector?

One shoddy solution I have is to find the eye-space eye vector and then invert it:

vec4 w_Pos = MV * m_Pos; // World space position
vec4 e_Eye = -(C * w_Pos); // Eye space eye vec
vec4 w_Eye = inverse(C) * e_Eye; // World space eye vec

but obviously that just gives me the world space position, negated, so nothing changes when I move my camera. For some reason, though, this “sort of” works:

vec4 w_Pos = MV * m_Pos; // World space position
vec4 e_Eye = -(C * w_Pos); // Eye space eye vec
vec3 w_Eye = mat3(inverse(C)) * e_Eye.xyz; // World space eye vec

My guess is that by making the inverse camera matrix a mat3 I rotate the eye-space eye vector to world space while not undoing any translation, but the whole thing feels convoluted and wasteful. Is there something obvious I’m missing? Ideally I’d work in one basis (i.e. transform my light positions to eye space) and work exclusively there, but I don’t know if I can get around having this reflection vector in world space. Is it possible to transform the cube OpenGL uses during the texture lookup?
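That guess is easy to confirm outside the shader. A minimal Python sketch (plain lists, no GLSL), with a hypothetical camera matrix built from a 90-degree yaw plus a translation: the full inverse just hands back -w_Pos, while the mat3() version rotates the eye-space vector without re-applying the translation, giving a genuinely camera-dependent direction.

```python
import math

def mat_vec(M, v):          # 4x4 matrix times 4-component vector
    return [sum(M[r][c] * v[c] for c in range(4)) for r in range(4)]

def rigid_inverse(M):       # inverse of a rotation-plus-translation matrix
    R = [[M[r][c] for c in range(3)] for r in range(3)]
    t = [M[r][3] for r in range(3)]
    Rt = [[R[c][r] for c in range(3)] for r in range(3)]   # transpose
    ti = [-sum(Rt[r][c] * t[c] for c in range(3)) for r in range(3)]
    return [Rt[0] + [ti[0]], Rt[1] + [ti[1]], Rt[2] + [ti[2]], [0, 0, 0, 1]]

# Hypothetical camera: 90-degree yaw plus a translation.
a = math.pi / 2
C = [[ math.cos(a), 0, math.sin(a), 2],
     [ 0,           1, 0,           3],
     [-math.sin(a), 0, math.cos(a), 4],
     [ 0,           0, 0,           1]]

w_Pos = [1, 2, 3, 1]                       # world-space position
e_Eye = [-x for x in mat_vec(C, w_Pos)]    # eye-space eye vector (w = -1)

# The full inverse undoes rotation AND translation: just -w_Pos again.
full = mat_vec(rigid_inverse(C), e_Eye)
print(full)   # ~ [-1, -2, -3, -1]

# The 3x3 part of the inverse rotates back to world axes without
# re-applying the translation, so the camera pose still matters.
inv = rigid_inverse(C)
part = [sum(inv[r][c] * e_Eye[c] for c in range(3)) for r in range(3)]
print(part)   # ~ [3, -5, -5] -- not just a negated position
```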

A “model-view” matrix combines both the model (object) and view (camera) transformations, i.e. it transforms directly from object space to eye space. If the view (camera) transformation is separate, the other one is just a model matrix. Their product is the model-view matrix.

For a world-space direction, though, only the view half needs undoing: C alone is the world-to-eye transform, so inverting just its rotation brings an eye-space direction back to world space:

vec3 w_eye = inverse(mat3(C)) * vec3(0,0,-1);

But you don’t want to be calculating a full matrix inverse for each vertex. It would be better to just upload the inverse camera matrix (i.e. inverse(C)) as a separate uniform. (The compatibility profile similarly provides a ready-made gl_ModelViewMatrixInverse, although that one maps eye space back to object space.)

Failing that, if the camera matrix (ignoring the translation) is orthonormal (i.e. consists only of rotations), then its inverse is just its transpose, which is much cheaper to compute. In fact, it’s free, as you can just use:

vec3 w_eye = vec3(0,0,-1) * mat3(C);

As (A·B)^T = B^T·A^T, multiplying a row vector by a matrix is equivalent to multiplying the matrix’s transpose by a column vector. GLSL’s matrix-vector multiplication treats a vector as a row vector if it’s on the left and as a column vector if it’s on the right.

But given that two of the vector’s components are zero and the third is -1, the above can be further simplified to picking out (and negating) the third row of the matrix:

vec3 w_eye = -vec3(C[0].z, C[1].z, C[2].z);
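Both identities (transpose-as-inverse for an orthonormal matrix, and the row-vector trick) are easy to sanity-check numerically. A small Python sketch with an arbitrary rotation matrix:

```python
import math

a = 0.7  # arbitrary rotation angle about the Y axis
R = [[ math.cos(a), 0, math.sin(a)],
     [ 0,           1, 0          ],
     [-math.sin(a), 0, math.cos(a)]]

Rt = [[R[c][r] for c in range(3)] for r in range(3)]   # transpose

# For an orthonormal matrix, transpose(R) * R is the identity,
# i.e. the transpose really is the inverse.
I = [[sum(Rt[r][k] * R[k][c] for k in range(3)) for c in range(3)]
     for r in range(3)]

# Row-vector * matrix equals transpose(matrix) * column-vector.
v = [0.0, 0.0, -1.0]
row_form = [sum(v[k] * R[k][c] for k in range(3)) for c in range(3)]
col_form = [sum(Rt[r][k] * v[k] for k in range(3)) for r in range(3)]
print(row_form, col_form)   # identical: both are minus the third row of R
```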

[QUOTE=GClements;1277056]A “model-view” matrix combines both the model (object) and view (camera) transformations, i.e. it transforms directly from object space to eye space. If the view (camera) transformation is separate, the other one is just a model matrix. Their product is the model-view matrix.
[/QUOTE]
Gotcha.

I see. So you’re just taking a “forward facing vector” in eye space and using that inverse transform to bring it to world space (ignoring the translation). That makes sense, but doesn’t that imply that the eye vector is always vec3(0, 0, -1) in eye space? I could see that, but I thought the eye vector leads from the camera position to the point on the surface being rendered (i.e. a vertex, when computed in the vertex shader). Is that level of detail unnecessary?

And is it fine to cut off the translation?

Neat! So if the upper-left 3x3 of that matrix boils down to a rotation, I can just transpose it? I’ll have to write that out so I can feel comfortable using it, but that does apply to me, so that’s great news.

Thanks for all your help, by the way. You and these forums have been a great resource, and I really appreciate it.

If you want the vector from a given vertex to the viewpoint, then the world-space position of the viewpoint is the right-hand column of the inverse of the camera matrix C. If the camera matrix (ignoring the translation) is orthonormal, then this position is given by

vec3 w_eye_pos = -transpose(mat3(C)) * vec3(C[3]);

I.e. transpose the upper-left 3x3 and transform the negated translation column by it.

You can then subtract the world-space vertex position from this to get the eye vector.
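To see why the -transpose(...) * translation formula recovers the viewpoint, here is a short Python check: build a world-to-eye matrix from a known (hypothetical) camera position, recover that position, then form the per-vertex eye vector by subtraction.

```python
import math

a = 1.1
R = [[ math.cos(a), 0, math.sin(a)],      # world-to-eye rotation
     [ 0,           1, 0          ],
     [-math.sin(a), 0, math.cos(a)]]
cam = [5.0, 1.0, -2.0]                    # known camera position (world space)

# View matrix translation: t = -R * cam, so that x_eye = R * x_world + t.
t = [-sum(R[r][c] * cam[c] for c in range(3)) for r in range(3)]

# Recover the viewpoint: -transpose(R) * t.
Rt = [[R[c][r] for c in range(3)] for r in range(3)]
w_eye_pos = [-sum(Rt[r][c] * t[c] for c in range(3)) for r in range(3)]
print(w_eye_pos)   # ~ [5.0, 1.0, -2.0] -- the camera position again

# Eye vector for a vertex: viewpoint minus world-space vertex position.
w_pos = [1.0, 2.0, 3.0]
w_eye = [w_eye_pos[i] - w_pos[i] for i in range(3)]
print(w_eye)       # ~ [4.0, -1.0, -5.0]
```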

A vector representing a direction (rather than a position) has a zero w component, so it’s unaffected by translation.
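The w = 0 behaviour is mechanical: the translation sits in the fourth column, so it only contributes when multiplied by a non-zero w. A tiny Python illustration with a translation-only matrix:

```python
# A translation-only 4x4 matrix applied to a point (w = 1) versus a
# direction (w = 0): only the point picks up the translation.
T = [[1, 0, 0, 10],
     [0, 1, 0, 20],
     [0, 0, 1, 30],
     [0, 0, 0, 1]]

def xform(M, v):
    return [sum(M[r][c] * v[c] for c in range(4)) for r in range(4)]

point     = xform(T, [1, 2, 3, 1])
direction = xform(T, [1, 2, 3, 0])
print(point)       # [11, 22, 33, 1]
print(direction)   # [1, 2, 3, 0]
```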