How to get camera position in Worldspace??


I made an OpenGL project and, as far as I understand, I place my model in the world, have a static camera at 0,0,0 (camera space) and move the world around the camera.

Now I want to place a light. When I press 1 the light should be placed at the camera’s position. But the light needs to be placed in world space, right? So how do I get the current camera position in world space??

your application should know that in the first place, otherwise you wouldn’t be able to build the “view matrix” (e.g. with glm::lookAt(…)). everything your scene contains (camera, lights, drawable objects, etc.) should be described in “global” / world coordinates

once you want to light your scene, you can simply upload the light properties as uniforms to the shaders

yeah, but how do i know the world coordinates of my camera, after i rotated and moved my world??

in relation to what? the universe?? :slight_smile:

the camera is just a point in space (global space), with a certain rotation

struct Camera {
    vec3 Position, Forward, Up;
};

…or, with a quaternion instead of the direction vectors:

struct Camera {
    vec3 Position;
    quat Rotation{1, 0, 0, 0}; // identity (w, x, y, z)
};

and this camera is located anywhere in the “world” / scene:

struct Scene {
    Camera camera;
    // some objects, lights, etc
    vector<Object> Objects;
    vector<PointLight> Pointlights;
};

if you pass that “scene” to your “simulator” (a class), it has all information it needs to accomplish its task (simulating the “world”)
if you pass that “scene” to your renderer (a class), it has all information it needs to accomplish its task (drawing)

an “Object” should have a hint on what model type to draw at its position, could be an int (array index) or a string (map key) etc

i don’t have a camera. Did you ever look at my code??

I have vertices. Those are multiplied by the model matrix and a normalization matrix (scale to 1 and translate to the center), so they are in world space now.
Next i translate and rotate THE WORLD(!!) and finally i multiply by the projection matrix.

Ok but how do i get the camera position in world space?? Example:

before doing any movement i move the scene by {0,0,-4} (move the objects to 0,0,-4). So in world space my camera is at 0,0,4, right??

My formula so far is

QMatrix4x4 viewToWorld;
m_light.position = viewToWorld * QVector3D(0,0,0);

But how do i need to apply the rotation?? i can do whatever i want, it never works. Please help me.

Well, you should probably change that.

The transformation chain typically looks like:
projection * view * model
where the view transformation is the inverse of the camera transformation (i.e. the transformation you would use if the camera was an object).

Historically, the way that the view transformation was handled was to concatenate inverse transformations in the reverse order:
(A·B)⁻¹ = B⁻¹·A⁻¹.
So if the camera was at <x,y,z> with a heading of h and an elevation of e, you’d use something like:


But I’d suggest that you’re better off simply constructing a camera transformation using GLM, inverting it, then (assuming that you’re using the fixed-function pipeline) using glLoadMatrix() to initialise the model-view matrix to the inverse of the camera transformation.

If you have an actual (non-inverted) camera transformation, then the camera’s world-space position is simply the result of transforming the vector [0,0,0,1] by the camera transformation (i.e. the right-hand column of the matrix).

If you’re desperate to avoid the use of GLM (or any other matrix code), you can use gluUnProject() with the projection matrix set to any symmetric orthographic projection (e.g. an identity matrix) to un-project a point at the centre of the viewport and with a Z coordinate of 0. IOW, un-project the eye position.

yeah thank you. I got it now :smiley:

QMatrix4x4 rotateBack;
m_light.position = rotateBack * (-m_translation);

don’t know why it didn’t work that way:

QMatrix4x4 rotateBack;
m_light.position = rotateBack * QVector3D(0,0,0);

btw i prefer Qt instead of glm for one simple reason: Qt is MUCH more powerful than everything else. And i don’t need 3 libraries but just one for everything.

Example: i read the light tutorial and got to that point:

vec3 normal = normalize(transpose(inverse(mat3(model))) * fragNormal);

and while i was thinking about the correct Qt functions, i found this:


And yes it’s just that easy.

I know almost everything is using glm, glew, glut, … But you should really try Qt (currently i’m searching for an alternative for this but i’m pretty sure it will be very easy, too).

I’ve been meaning to try Qt.

Here’s how I do it, for what it’s worth:

glm::vec3 CameraPos;
glm::mat4 ConversionMatrix;

ConversionMatrix = glm::inverse(View);
CameraPos = (glm::vec3) ConversionMatrix[3];	//Get the camera position from the view matrix.

I take it straight from the View matrix. It’s the last column or row, depending on whether the matrix is column- or row-major. Same for any object matrix. The inversion turns the view matrix into an object matrix for the camera; you have to invert it back to use it as a view again. (Or, as in this example, I just build another matrix so that I don’t actually mess my view up.) There’s probably a way to avoid the inverse mathematically, since that’s going to invert the entire matrix and you really only need one column or row. You also only need the x, y, and z values, so the conversion to vec3 drops the w value.

I think you can also get a forward vector this way, or any of the other five directions for the object (which in this case is the camera), by using different columns or rows. I let the view matrix keep track of my camera from frame to frame and deconstruct it when I need something like its position. It avoids gimbal lock, among other things.

Looks like pretty close to the same thing; not sure if this is any more efficient than what you’ve already got there.